var/home/core/zuul-output/
var/home/core/zuul-output/logs/
var/home/core/zuul-output/logs/kubelet.log.gz
[binary gzip payload omitted — the compressed contents of kubelet.log.gz are not recoverable as text]
l:nsrUʁA2|/zx8s͵;A…mrMP""KtY~퓅Ͳ3)8@:NݞyrWYϷu[MAIAm`'mJ+@[":td:etZ&_ҲElm{a%DozcY?:G7Q|5X"db3p"ĜdVccvuʖ<' )(HvB1+Q\ǼA$+W'_ɒ~%hR61s]իح窲BAL՝Ce)%.4\P%EIxf9qS6V7ID$Uö3 ڜ-NM9#!)<1p.x/Z6u3!;hrֳ|d*@`6seYZӠ8MQ1̜5&~y膞Yl|q9 Y{ mCU~"~P 2xuGxZAõ;b{yD  rIVթ\:Uz%r:K٘fuZOV?{bN]/NTia]Z[^9VH%RkB*u@P+hxU\0t6[%Uq*.^ūxU\5B=%Uq*.^ūxoHTqz:U\Wq*.^ūxU\FO,UɪRq]SJ?\~PJ?JNRZ=RZUJ'zʹ'–QE킐jƼD$O @xF"6 crSaS1=J=6 {-ݧ²M;H)$ C<@M#D9;}g\Sa<6OJ]6ٞ`eZKYTl~\h<B{SUTRʼ *"P&^' %7hDr֋[jmG6| :]g#V%˳m/1Dg81FKuMj=oSMȧݻI3!j cVK&II\e*n?"KZdohPe㧊G fƤ%SZ0Fސr) X!T H4ʎyg`=:/ÖײFvǕ5)f1J夢w21F}.pIJ\Gs?zX{Poy1p7F4'-y-H"Xr8Y2G{accӡ fJMѧRD4pOFΔ圵88$e1*&Pvtu?޾iĂ+e!JUQN^xaidYJ{9j@v l.qp&xv|vz~Mƥ|2chX_I/͕dcɈqo["4LIzwT҅%3{ݔJ{8bcޏw̳gKV0Y_ C0@INQ(\҃cqBFŸ_/9V$3(M% +i1s56i8rF^b>cM4p1ŞoM,].]H-a"sG_dʲ8vֺ 9+)񣽩0}¹45L?@y;_o|~z|2i[]HaW|5n&>M㞧Ii˓D]'Iu.[uc;g1,XfAzУ3O/N+*i}*:dSj4# _Pʽ?d7cX|EO ;O. v?G?|??}#ǿ|<;u-0N`m/2@>ǀ;Ϗ?hźz4>oha:V]zk[Mt'#tƂ;($}Z9KQ{/C6]o݃ue=~+Is/⃏$ p'IA㔄k%C֓@"šU2w -)R', QI&!sj$9&Eb,KaB)R;ňA *@[D/18I4QI`292;#vfΚ7\&OG:,5s$6IhZN!,]_-ʾ _bIr$a:>Pt^)/n鵵\iڨh7\V]J=4`v^KMdr9 =h~uk6$޺~N.|JlF wצٻy~^/)8??OiYg6L#Pʛ9u-BK{yZ2~oçޡa!Zjl؜Ih68 y3ALxNп};]wp8CIERpAbPl{\cHb&I~R,h]bahk?Vkar8l ڏIT:'8i8'!`ZO^ℨ -{6SO k]Pl(oJONsT1 7)kSoGݝ_1SzeR#YyL048bcb*S$t!^!r0t`#R 5KZGpĆ " !A6=r֎<~}݅J?NAwW]pctzRmrck ,]R''^mNW.X#j5\(Y3耧OfWZφ<ŭ82zfXVYB s)r$!O+ȓuㆉ/ȁ@p_9^ Kf"c%6Z`b4!0J1j 6QE8#^{M'I= υuU٦GCR"YJQۖ>M`U9-KmxCH9J(JC @%c>jܶJ)-gɍ`-_;#}^/=!+1q /fΞ'_c<_4 i?|*(.~4n6{N!U\sZCNٻAX^#>"Á^,NY SHZrD21O|ML]\Ps9]S7^B-rĆ2rZZ_/zb(˗7Eum)Q^?aE6&fԯՉtlU;ܵ9p9i`]0kGݐKҺr%˃uQ-mvŒwKԺ-EA׍ t#6sXJ4{:ZgY_Ш2+]$vMw;5O 3j$Dջ[ο'f2X*v{&L嬲jQiMLxr6-ؤ͐͡oɠټӮaMDawmU[Z;Sz:q.vF)LIL - `p*D< !i0{ID;D+n遅ш?_MKgI-s)J[|D"΃&Pj٭"v^@ D(GKׁ^+8Au6n7vzc!=JtҮ U;PTbzpR\`nZ=w܇e>yM\ &uD=J1EO{i{0;˜r%nXs&DGimJ̝f,JlJa@J#68R$#K=ARz\TkXek:UNZ=1h5r ٢6 Cc p]Q6y;te߆EQ׽60ߍjD Uvϋ?5;l=9\~2J9I ,JcҞ2(d)I^_}i^'i԰UQȑ~ѯ˩{+}~,֋4_7Di]힔^+k)L@R(MQp TQ͍QmQ'(?$GA?j*ƴqM mg¼ CBQi\ Sr4qKiD-W^7Q)dW ,KR \ࣁD.WZtpEcdA5pk} @_-~ (*)9KnKd~ ),+Lec%0&NK~4o#2 ۻxMp0-G`Yv *yq-{@Amg+YAN'~p ]Iol9;8C<߫r.:OG9*5jo=)o$דAq,1}QTW h~=ϮS%|M32}}-cIN߅o7u$l&jmIZ)h'} 1k$e0du69{Ky[^65#oǟ>شߎ#¼΁/3wx>3fh.lc"Rnީ08ll=.(RܥpfoC_=Xs)A6yDR$.? 1L I*ɕnU/^%4sR](յy3_'">[3&NNc E"ޠK)`CLO30ԽZ(85iatH83=' Ex(0y,W秱a C}vcr )W VM[9^DggM: gktLYأq?$q {l!yEJ}LYq{J҅_sw/ŕ ήΗpMbͧ(WRX%GY~~eFPL ڙ|iH4BO,@}s,: ? 鉶.\fNHY;+AgnֺgBLSz~d$xo_1K}|q`PU1 *'~+˙\;~O.ޞo._ L~v`fzt$ȝIe'.QXRbi&"DrثR[$+V@3>w^.>DɁ7`́h"^0sa0MouzPe@ߺK\|eKk]fuGkvUI7w٠祖a6oqu}8t~:ɛ- ޜ쾺>M]+*Us.Wܴ{ ??> {Ե82s;p'JCp2q;At' >5e55J Dbq=dԁ}lJ"[Rk%7BZ13dZHd%x͵֐C'b5@kS#&:̅xbWAxe<ޤ>xJ2[^JJX/ (,x&yjvdFNH_|RR.!9*U2 "7iFnD$j%#.@UĖNr6ɨhXfv,G05Yq:%/]M;EmU*iW-Ϝ;0Bxas|eyL,U2.DeS aƆ2y\N-a }%C#C,D(LhCY-/fgǶh*#9 WimLAJ1+e1 Q1qVm7F$X.* IJzРVKPVU1>mRMUG"ri:iɶ*=Wi&e*6:2$5Bg] b ЉEHpq/xXM;C_Ll,6q aяOhsz&(IC㎂~=n%n)Gvf3oƠ~'J$}Z  F-)%(B8QAk 2Ue B(%]R*f],iTވu32YC5q6 ŐU;)MY}9Ţ3KoJʣE M6}.~Ƙ) .Mg͒xpd b|lY̳.[:[N!vE{5>b&*r CkΙAȘA0ǢrF ܪ,MP3y܃Rh2N).rVIITjIx@s4|sfc{kZ&0,Q9`CM<}G \i&x9ϐ?E4YGdlU =O.N=s@/ yT NG "$hAqVh!Xb9HFR6YD\%3,LJ6T<ԑ&{lyVχz0~#VkP]'Ҫ[V* v7x^|%S d5fyRϓg=^2&qWr._~gy~㶔$~._'g)Ժܣ߅.͏ ݚv<߻ϧywa:;.f-%Gv^Po;o&Ѡ5J8|üF:K#?ףhہg6Z1Fo./`w.FfBCttLMZ4nA 8[28p+W5?иv6<Cn@(sSꃔ<( y X/\7gǷG R F--}2&:x<'Y*7Jz)Ac#Ιު/ء(h@/6?ۃ՜q3%l8Hi!SDL *ifP}_藆K.FhJ2J_:qJqJ'SjV7Jrwg&h.'gw& ӊt"+|vMFۭ+.|vw^_X mTHc{םQxBͼSHY u,x XZ.BX?l0lЏ܂泡sx0AjHJFA//RtG?wxɤ19ȐCi YJ2O% n¹p![MЫzES>ixw5x{Uكq( &x8$x^s~У_zSMwZ^^i:k~} /ڟҋɫe/ae( wգ\do(3|1?7+[i9%7cJ3E+a$k7i&nosm]_6}P `"!]B[ &O _W\v hjG:G_ƨxfNZ`WG]b%l2n0FJe?{#j{@R֗#rv-lpy6݁#^~gdB3Y/ tmCEHh5RMx!CJC2{m;k-l2JDs"]k4kl3J{\ζS!H8ODyhRe I xMLP ~hP!ipiܟObr!j3vJ8OS' jW-{96 rݻdwd. 
[v iyrپ {Pl1:*sky_Kܲ{o\𞉶%Ó>1:׋wHhᤁEv+@Z7HxcZ{N2a~G.wy^p-+aMit'IճLԝeg>O\3oYY.',uX}?Ɣsz (U  EN18B'.NzeJ@)8#3W=נo9W9 41t} sZwx4ym;&[a+Ric#" e+A6kf9y'Pԍ=M,{Ĭ"@~Qy 1 gdN]j+aUg 9 y]aQ$gĖ@/a>% 6O=X^wF ]$&mfÃ!=s˽ '9 ! nK^;%pQ7131I턶DWR4)' .\Pho9ĉyϠvjlȸ?uARܟR0q|b4$$-4g\RG괗:Hja> !5B&|ICfCjws:[h,y&! ں!qn1-Ѷ.G6sqRG^k@%v*F316JA79 XQGELHF- &Z6۔fʡ=p[2_!oX :,>r BŎz)G~ڼkQ2k%1A6Q΄k87<&|t&VQK~ HA*mIJRYxI@"8,vɻDcsAפ]/x4yqo2  isog,)LIg9[)*ƃ#)G ˅\т`B#giۗ`DGi &/O ^ ׾ފwǿ='7Dld~ԾP2|7wPp8Qe/ 2*xg Xq~I6*2>ɓekW&cL*;)-KZ`2RkN됤f8 ɓK1brIVƳ^v퇓,RmBͪx`KSU|:R.*=LU k]y.^Xht@f)91Ejȁi"I#}澔)i-@`cmGk؝mXJGJu Je-R- ~ &'(}dҎ1!# }jRcahcdvP: ,ߓ* lXBb1#ݯQ}% -644_UKqOs7Ewצۧr)xnʐ6.#;:;rʡP; 6 F5꡴Ө֢{;j%l?Σ8K-.]jhўtQ(dn^6Y3QP PS .(^Uk5 i>[5Y34Ɠ:P2)sL"X iA%4Ըq`|rܼ&}g?B|?*-`f}Y{/۶*HXGez e oz9v3/yژsKdT1^&Q`f<-;[Of}Y % '$ZAkm  #&if>E jDur$6 )guǎ+GROɩv~\Ci߿0ԇ"u ,JE0[/:A]f*u7%ɼ=F(`T3L;3a&ኰ)bc2)a4`T$$*VmZ<|2۪h{kv`shN64X7x>~]qw`OX>'`/ϬMzI7;bǽ`ST$I(`l"/}$]@ 0H'iG,"7fKlx3Дe(sJAsP(|׍A2g8^\ʦ 6A•:ݯ)ZQ偔ݗ\zެ, .ʉ7gJ SEA6Rkˤc)F"WGm٦Ru6і;ҽ,S:zU!b DA0/Ż ʨ-_\Ni/W\o,{k;Cy>4/'/-)"W łB$BEUr"@(0%Hو+owRyu-Z&Zz.&.tͮNո{oZ.F庮YZn끴=ItT &sTp(GRp^^X/?UL+I~$Rl9T{V( gy/35h֡1lC>f 2=gA%nfv%`5\d=+fvEṼ乞s:(3{'䗿b6h;cGX|M #Poܴ f2+r#\:RW`Q4k;\U+pJ k\U \Us \Uk}pV:1[+-Hh)U5ؚU5UmZI0[+f%?O>}9c|^\TOY'8wF`W<ԷtNC#@c Y `G+-=!-PV)].b&+M|'. %#L4=ȘFBW@Ɍ%bC//pwvŔ],Oy2zcA`^5P^_ޠ3,v?G\8hc{m_F_ - &NgqpHjy?[7p7.`\One^3QRˢCf(YIhbAt)+0lﱩF;*/'FZ~柒ڵ-EL'ʶP]RB]T&>Ɍ?Wi7T{&ܢ vMS?_?_|;_faH$nDi7r[Fy~#;t#~>;v㦭Ƶ4јۆ,m͋n.";gZ[m>^m7ҫeb^7Pu4l=&Z헎IhzIvj&F—g^ֳхXJ=D6KWB&THQ*A5NK 6S~|? ܝ{AoM ?Ḫ 7 8֑U`xs% $1Gl\1R&I %sFv}BNlVbw) &d/TT"MFʤ9W@8DHoLVi[( %oiC2&)JQ|;@cA%̜ײim`{XݕvZrj"$co˷t:}K{t/~zNb؋t 8(ZH:09R'y=†k^ϙU$mxTFhGAk6eIJ$t+KA)#%GU)QLF[ԼfXDX!T*G"&_|Cf 4nh; EѵI9ϡ\,CbcM7V o I=or =k+#YC4,B&Io S Fzb&a=xY'RV+^ostS@:S)H2Tk$%4sZY %FͦQu)Й9+ieP=bM~i 1Nj{t,P} zW O%eKbc'0 ORJ&isQ?o'`o!vY,F ~# :ʵx4]{Ů;oG}s }C&_RwξOgS*F )xjl&W,1l2*f?I;LxA6:Tj{G@dld[32$j0ٜHie"ʦS$:3=-GTdse{QIS!(gХkؙ9sDFf _ [۷zOh8Y=^|x{zN5bbU+ͧ-6*>l3z)ͺ(#dG1 U:X1tP<9x'&`٣` ְR%C٣ JP\F%YA) Ny&/cY$!R)$Sx(z-9Y]:晝99.]^'?]>].xI"~Bͪ8J KSU|:R.*=LU k]$ʅ-rW.&f6<+YJd )*hr`t̺~ڱ34c4!o5`Ro?l?z,fZ][o#7+y[b IY فAH[;H8SlI|iI۶<Ķ-XU$'Dvm:)-(6+,YL'^X݋-3CX{"| %)&xŀ ~ϷcZɱ ^(S1x2hQm󷪹YYD:Bi93y8J*2WDJl%}VwճjH!i&q6E =-+ '*Ld9sPY"kY#3vIv5jg>ovŚyϱxCQI4YW7J~1l3|꽐[xVmZKTj0:4UtJyXAGTwZlf'~O~AvL$25øS)M)Rp[\A$]pii#QBgj|^MI3[.76Ӧ4篎b^OB_>,(n`Pm0SV Ѿ!Jq_16(L_a=kw03i]2Ys.Ř`U<%ƭJA<$<*l*ړф8m6$h>OB}a?.Z++9;߳V69]Ķ]ɂUGRE 6 sy>0M`9Cdl&gU SIt",ю$I*o hrHm:ؒcV[%Ke K)R I)18t9"L̊\.!jr[~BO-u|sX dug^J{{|&sBl"p.yD9S`"B cJkkAth\zzy0q.Y6q~ёk\o,u[Ů/&i.Iѯ_ą. 3FDtRF'*O <ZYuo]Yf_x|& $X@O{I `x =!Zd9WޖnZ;"Fg _M6K| dR짂Uؗ?9 %w# --hhCؖΓQ$2Vh%W:YSXsO G!CqN=Igiu@:u'LeeDE`,PH_`LHɵHYgSQXyP1xD" ɒW!1L]qWZ\EӢEi8+s]O# {7LLtHD#zQU[}u)ű] ŏ&K+i Iӕ+Uk2z7iqwo~{I3:]ʏo/g7ެ;gw'j/0gn5k"Rϗs?A~kI1ؓ{hF4vus7 pTc`2@OW}/ Uͽ.צg5ڸLRGZHH}~MbirJ{%#~̏xo4_ cԨbCtycr/礎߿ϷߞxoOoҬYf)i+{n=tكuzV3!UU(.tx 1`+U&\4WF8U%"x.dI'2:g2$u@RgEpTIAfdR nOȼhG|1 br2`!8HLJ\f>wؙ t>bḵ3%<\}دyRb|X$MZ)wh\䋪5QY$,-M/fݛgua!]\YȍM WqVc_MVA\|BqtwEw_wons8=[Ѭ;+Zhn;-n6kܬPDs{˫wgEGü[Rp4薆uVg.SoHT^m8,`.Y󜧥?u0|uYȭ_*6wܹWDjI %_ ek!$iNjYT#I-0S?;r>mUR YgC\bQ.eֱWǖP[~ulɵYbD1jc\Z Μ [llB) 4:tƶ3 }HK-4 u]DJG.y2%#)+ijNjTwg.!bƉ#3%A^E1@Fbi:cm%vCMg4&?-$^[Hnz%B:ڭrzɫ**.+cA@R(cګȒY3Z[Hy]ÀUdd+Z,W zRт@欼I+3Tؚ8ۑ=_5,lM3B2  o,>.gPTo~q4.  
F3GlcmlL$򬌒yQ6(rS,0r0$Qdyl^\ʑ`ϣ`2!EMM*hF84b;f]%)`=^-q#v q-<nM;Em2j;vŀsϕHg-j!ЖYZqG2D&ȴU6#AcMD0MxOiǾxZ=>xU,f;7m6kfm%EXG6-lzLrb0zzqϔAC.3M K!ņ]Yll*d:IB;`!dASu d+h,X&SΈn˥ ܏ +0%O>FwEZt'^3OĤI@̥'yAg(x٨qvg4c=!<*z D`2*.PقS\ "2I@gb ;iyZ@8ZFWoFPnEͽ,gLBFm&R>(eշfH+z@F_Mעfw?p3M9EmGv]e=݅l_jċ/n]X aGyf'KQ$5A1\\7&D`1Ӫ>[@Ld $FRD Idg2f'JzZBe2(?[Ӹڰͷ-2hr53W:"⢦ RLh<3..?ZV9 ~˓ET-\AR:I.nه }=&?'cٺ?; Gqt[;oFkjZX!.h5cpiXWTӛ[ :ȋ?hj^}74/wq#KGKR[}]Wܻء0/ &׬uM45Xo[\njH}eMt=֜-d+7NBmְƭ퉛F-<| 1hdk̾$4ϓ/9'˗q~I|'I|zY?2xr5=oJ5ʋ?oGt~a49)LtY CɟX_z Cnv[`{QlXm`֍,y-[9G$HDr]o"s3C8?j1$FE*J@q]iELEkX mG~QZm?^GE|e`Ӕb'YۛZ vtv]ao岧 -xu[4*se7Ӱ#e8Ӎ|'3f>ՙ`ZY4(VNfj.LҼ#hh)=;)bo9%5yJh^%e}BC+/qhv#F zR%\*i")*P&!!=䐳K= DDwP-pHPʹќo6_;GN RNI`vqw4}͌ k_t2<tDqd02III83ZRt{CEycD[" J`#ޢI.Rg:EKbLx-N-<~?y|3MRg/v~*!_%vS¯8zYj,Y.q-V@ JKܨ%ՒRѡoqI%j}[RvhVۓo,}ˤcp{9E3$/  r Oɪ伔&2&}˯ܷܖaS:/.V/:GGx:R3 DP sQc&ZBg)NFAxmud*A]qiQW0!ʢS\)ۚir]w5CyMv<*捩@TeXEBLuyq䌛Cⶔo^Jv8,5l'I`}tV/[lHNp-L%ON/M,X`FKt.vQlN( ^Em*pwxpVc( n.=]! cbl`]mEʒ}K+jajO0}D+G뫞Bv-.*7V },DJ˱̎` Jk&Z`sc0s t?f: :3%#SJAiꫭ<$m@x gD8X<AP8pۼAh1w89ܓu6} 9Bx$SR2 c$PS)Y8襤 3x2wCO4m2v%=Ni6|TNmKQ+2|M\+P@%hԈ,H9Yґ !"β6E\N Um$}@o.sD"NzO51jL&%110g\h8Y{l;yr>cO _XJ8OKxޒ3kSjtrp0J('^( vJi[mu;JɐD"3{ )S&GBHGG#SW߇u8|12tp~|wsCD'WJX *4P1 h^=17>%17bn qFD8"B$͗iQ QD !8|PXV#XL1)JP (&J@mrpkбS8>m>n'7wu{+&+\߭'j2%od *{:WUvm3o5UU'unvY\)x?V6UZ?)|| KWX.]5WWT:+\W߭f]!dՎͫ]5^_ysJ|ѻ>o2GôYP>Ee5zu<МvZtӀhwn2Oy:7 x곴1]u21J x+|ZؕM|cl=PXQz 猆_xe?78kY;,DUch1JMRWDA%y!xS'GmGҹ[wPgg=Lqѳ:h3r&0 fCR D R0nE݆$MjRyr^{<byG,KXM L-Xp$J`qHچMlN t *$Eqt0>0^!0%N(T QI)*`hHp(ji*# L*!M;q 0wN]MqJޟoB {;GcK4["&/Q*-rlO`AZ|b's2 < CR";Lg0PZkc^y@ ( H1Ir >hdG4<0O֏s?[kP q9z?pCC]3OVZItF T;N@-I'A`OvР]ifn?Y =j+QԶewǸ9ȖR$psӗE&HWNJg9J#w4ZԻ ,uELK H14kpaDNEY^PS҄dX^^Z)<*khҍ]ӨxdxNNiδpZNJivjOOs#~_UI51mIή\*#=YF9/4^9zo4aʳ U0r-]U'q1-;yzl2^}8;Z]Kr[vT|8"U՛! ֤X~u3kjfMfLztggP cD+~ߌam{+ret9 rex|\Ś tWllmUڨ*1 Wi6x[Wx7^׫RfJG Gpgv'adq1^ϪZxIo5tgUXo2^_=a.eJaR*s:<1y*o*르yq5^F/(Wu+`^UeT2qY3λ?:8#<Քϫo b=N [dh(@H@_2OU|9Vf|xn'hIImɥ8x4 ^})DRn\tUq}mTVFu\uv<76;~1zZTti\!_Rh}|S('R''#3!4+ Dr=\.>6e=CjN+/ҭlpr0'l6xBͼ&M<7-,bnztCSsK4]?wiw4bl Sʩ|N\iQP!!ImK5AZM;*>֖;ZP;vxN6B pђH3A'$EQ#2mb@z^>󸸾LFŷ ;U]AqP> 08P@ z,%\س V/ҟG~uo0BtP&k, ӃM3!c #CTMjivohRk+ZJD\kJ=ꗠ 1PKwQ Fx'-v!2[+0uC&<-seV],S_|:b !Vr(~+bA~]mva24Y2*vX͕'i_nod/dzݍ|_+bn5ꁉRX'AiHie3U+qmQIf[W+*+(HK`(Rm iKyRdݡT܋ˈjԽ;=vҌtM%aj}{T)SF?F;EqMFB)y>xʈNmr1^gա~0;wkb7LØ0(#7a(O|^"L umP:/LbDC&?mhdU%+j.X[lN~ V1 X T2晐n[R*Q Vu$`%( -xe0JRQkcu:h~ zsCyUWbfwiFOg>5Nw픻uss7Eo!G$UԀ @0^BG61*TΑ@Y=tf1x% &N/VO(mb!ARti#)" xӊFH/ \W5.9TE%1ybćP\R:ZΖ=V> J;OuRs쾦~D&iEc<* <9rqUury;591s$.:Oá*Uv'))H'0:4fg/*|7Fuj+>4,h9|G4郟gH@H]ԥ3[JJߢ07<}t%N떻WZZf#G1FOOgU:Lf]YuWׯj߅|W9_Un=^NgXR;?^2>E}JXELȪ.zjU87?*{ը6^}SA塋g8mͭwMyx?Jo^x74lw^]}u Xԅ1g Z\U;d^PҮxEB+˄7NO j6$gE՟hzCo}9&LJsrv9ZXtIdH*:V2+!EͰI^1kqaPX_2ޠc$bL(PB@4&㢚s0s>ZvxpXSIE8 'oxZIJsLIExjۤm &-2ty6=ms^) _)û n][_">E5K̊q.ZmzCmfmĆP2cfp'%a8ˤgvRR=@;tvBgh 4K/#G|3}1,ֱUie x:!X-#1HZ֙$x] j)VnˏX򗖪\|8H6y\0u)bڦ^U}B`mwpk<6h l-vpkSN= Մ2Z*NW%]!]1.{DW0U+u_*u(sŁ2`CzCWWh=y=2Jf:F2+׋qT^$o7ˑrzQXNڳ8.OYbW_~TuVjpB$/g<{B/a_/^ƈ[Qh)[U"sM1pqu5Ԭ6+-v\} R,-#ҁ3eb̩ʒ>ZiJenydɨ A BxQ.fW =v=v]t^PPd$F(B4LXؽ IWV}8wFծlTYΫf5+(pWoyM1.[6D(i*)[DQ`D~{ז} Vrڲ.-c%%w&]!`xo p^BTdT tut%{.W}Vvi*\:RBI'BU+zCWm*\b:@?tRh:]e|X:F2듺B/!Wڢ+yoKzF̆MoNt`}赫 -0tJֱv]vmz J #ʀ ]!\EL_*etQr5UNih kpyoUF tU4]!]qɀU|Jh:]ez+fE(1˸]y,Rפ E[t$+~ál@7KWJJ!9/TjC,tw.DH-;yb FGF\( Rx-&dd?!5YAPCrq ǣ@*ȭ9d}Vw~+4"1ҕTD#ʀ{DW$}Vɮ]!]eRU/?EUF論UF)uut]!`)CW.UF:OW]!]%ddo*޹dgF #T lA,`%ȖW'3Y{(Y/22,lOUz{]ÜQZO+TRw#WvzMwWӀ:ŻqaHˇ`"׭Q4.W.rWPRȕ#WBkڸXx4h}^Fy}!xIM8aXϘWg=%ö ]*94L%'F6WrjY'?JFHrq\;“к˕Pʕ~iqW(r%.WBprH+op}ɕz3\ mTs+P::BdȍT qNInaܕj\ %/+ЫS"o7hԁ2Wi@'MMtOJ rz+EL~ p4.W4jc++nozկ+*Es5e\劕 4\ 
aJpE6Ĺ(#EQ㍝AC-xY 6_= Z&!U׶'JZrvڭ5ov lF B}Pz# Xq+ rErIC@ @~K+n.N//ޡPC/?"/~7zqt?VcwJ'kOCyo|? c7ϓX[w_߬~j7wϭ>eVnV5g_|ų5tկihқΰAu=meO"~]^ַy BjWo$EJ.Vt}up>8O=FoxzohwWpŭrl̳k7|{G9>kL}ʹay;v?ex~}VHv &I4\GjBf"-vVMvq MFwRh\ [VQ67/T\ .}К&6,rur8T  n/F+ٔ/rurPgS)#W{fJȕq)S3np$\MvWhmr5rfk=A"WOz>jHaJp}i4_/WQXJ3sjC/O4wJ:Bb6 0r%ƍ"WBfﮄ2,rura{Mjw4ʆTzS jg}&lܸ23SY DS~!=|eet] VS\@k'4xG*a 9qBLF^{-v Ra\QdZhL _de:kTI֛#W˕f/W:Q'K68G4ǻR9fu{̏=Cn?zBoMyuw?|.(䷳{q^#p=ۃc}DOV <??jSt!^??ʏtqq EЈ | z-oo_mW 8'e1~n!(3t~K6_a#n˟(O>Kل_uw໘]_ ]~~~YޞlN]Jں[&P7dUY yn*SBROoכ;$ #>]}B{~:Fj?jU~˫7${R(j4-m<,;]*eQt6p=C>ꜲQ1՘BT)UbGrrT֝q`U{.~Nmf4*rg?-47]֩" (buTz`lҩSV>+kZ-hs2ĠHI[K!BuUZ0QӔF3j]q*"hY1woਸ਼R֖5;-)ETml&ڦLjD=FI#Sm;cYLf c)D3,fE {nB-^rJ|N G"Z!Q~&&R*;Ȇힽ|Kl4#tEtNZ1TPߥɏ xB. Ye9ރ%vwh0B!>:D!#N)'t?E*&S!e<:fDyJƜ 9X\cACO/Ƨ\]7"(UEyjk*uٖ:G $U19AzT9iR%x:*}avnE;I&mI!a##k$3Jh#*K}% T )B6W+r05R5[a1S׼XOA5)jNXaPw"rn%Xt^{V)D#;=5$}m.50(M#_f< G Um rg#0ؽjTx^ 4sx9nGDb0p&ܪCZ]ć ̪tXBsGW6XLĵeݫB̆f ƺz{EK052P39/f+,YߘuȆIpѺ J(ڑkj$*Z &K( O-[L +5 2c!]AАV5p.H>a!VУ*(k@o45~+.Q\az@edVq _G-V8!خVaAnU{CZq3Fnͷ6@Q}2d("2m؍rtg-!3֜nŜN0t`a.! SL3( Cl'T 0~3XGfؕ.NVajoSCA̖)riU B!Rl ej@!NMFA5uu2RJeFT{u]0%WT؞3/#ɧz ,$dgԠVh8Z6'l-@e"b $#u'auh#ǻ1ZIۍ །A5}/f%UbMDrbx>$<vNJ&D̿h!0bg7'M:e͓\Ÿk{WՂY-oCja&a-7 .mFtdU8aJrREbX|LECNaGe :#.3(Z Vz$ Ljyu,CلMktq;ꀬ4/!A Oݗue *1[.H#38xGC^,TGP[DbYjF{¶e !v*"};ݏO7yߝDټ ȓQ>BOE&X90;"T4XBh;KSv{{؋9(P"Q/ѡB-10 {n;X XiLEQ,Ck,a3m>%TF;sID+y r@W01^aB ֈi 3ڄ`1b#hXfj q uB A1.'Jl[b~@JVHڳ&&GhX+H*xnQ@JM++_UjнQEx[aon%Q"`?UnXI0`$_ ڪh\1+%I*&]>|5Ie\W n=ún5'f Kacx0Dw-,F ݚZSJP'ՓGCoׄ fL=]$Ѱj3bO:Vd &% xKd]`[SP/O(7"fh8(=uR"˕,;d*#o , &lA=-lg$J"4M'&ݧN[uOzHT`z%> PY π;`X6VA+DMg ADQuLɉKjc ' C~FhX c%, cXɅOBBUD :Dp,m5dxГ(AX45)-1?{" b7Zm.:8pϢ.5@m`tDrQR}pD$8!o{P`yJ*QVW8D4J#x"?@azUҠhG,pxA?Е+ ϲe?U 77~)#o%\0 W62NIƓZ7K@$,7 n 7On0(ܟ>fEX_~LibYP!4r=z&Stz;L_^Y͗/i鏔LʝIvi_6ǫgIU:u0ls4ݻAXu89O@߸ ohfd$bW^  KTlznw.팻=- PDY |T}@B> }@B> }@B> }@B> }@B> }@Bt> ňk1+> (pʩ%m7>3 J5> }@B> }@B> }@B> }@B> }@B> }@g@լK> sPk:sՄPR<,}@B> }@B> }@B> }@B> }@B> }@B> }@BйX%ZiN(E9Ԣ}@B> }@B> }@B> }@B> }@B> }@Bt.>ljƼxޛÒjZJ]'קfj<PR̲{at7aaK%,ewlKeLm Z}% !ږ¶5ov3tUj*h=u*(-9ҕ1.0g13C:]#]!]YcXw',p ]ZNȩUAI wCWU//H?>]mRJKWۡeGrg+JnAWjתVo:/]`Edg+tUz骠l캆tu>tŬKtUkڮUAk٩9RU97qG6x;*/Ch׻a5K>q ¥?Y \.Rdnt*:alh??*nd6]2o܋qeJk~M>ooo˝̛w^vo4c{@**.8}G4Y*ݕ5ς?p(jdp*;oU/CӬs4g%hVabGWFw?K#T78F+Z(5WX善|QU.L:C.]邖S邲4}>4].]ZU+:Uʓ)(C:CD UQW ]i9 utev `up̜]AN~lx ·,Uܡ%ng CO OW7RzuAht`Kj;j;Hv(%=-R[ЕBڵ)5Rt \ͻBWNWR6+ƨCt 5ˮUAkD ]!]q& 3+^'#yWۡ骠 JX:da[OF1@S_|TYe"!m!~TU a p4ʣSXMBd9ݪs&v(Ϗʥn`t٣=.M62'QNKo>тy^א^4>G>MVQƱɶdgeQK&X%8RTd*q|J^˚-hI9pVU\1b1-"%Xc NU>7*ȴGtpRcQ)-GF괦L:cQMs(#QbwԂEO#{T\q~E9Ԗ 7~(6fPkܗ~|o4\/n(w<5dWw/[~9~v Jg/Wd_Fw_K\)v7t_)n/WOx2+tx7?gv4Ó*1/O%,/^? QjztWx5.>qVʞhHl{l8f7͂`,(YϐgF倚^uMU-MfdYYp\uI92OwKP@fN.וG8襰x*+ V}ȹqю5s9DŪ2d M(yMw٧(*zm㧙[Ќ%!vyjU GͮZcg~3w]GYxLv48r]6 th+ujL"Ě} Fe Oi؛ue#B,S_)^z뿆\ _w\<[y?8}6HHv.9ەp nپ&z h״ֱVv(W Sޝ\W*ʜzRAi*az4-/{< ҸrլMJITeia+5#BLrstx RZBс_v²9?fm75VA_@9ͬLȂ RBiO Ll-ȝ Bk%)D~B$`me3lf^rrm4DJrL"! 
e#dH"C#yyZ> &0dWA?|ia]#zyqb~y탠nWgO7W΋M (iJgmXZ| A%͏X_ZT֋u)HRV)}ZcdlP2GUAK VIy瞍(,'8kRJ"0L2Axgˤbbj 4jKs$Jkp^XRO1gSR% PAeM9Sij͌ =&Qg^I#}XZVxBAƅMj,^<3GZIrqdU\͂[KUv->ޣTmC꺀YօAz$j)`)lˇTxJ*)?Ls*V;ȕuTBYd\啐Ȥ9eb19ƙgz8_!gW?xl@!U"R Op(QD v4 ց j>quW$ > Ӊ%H+"g<Ώ&[iS\s1M^7Ny!ՇqIʧ}_X@(t{i!oD([+PJ.JPԫ"ʵwl_e(ۖP e#DH;i( x'\jgmCdMPʹќ++]d6P6'~g^=MAN%ܹǓdž]?Q{Ce.g,$>wy6EbEiu0i#SX:b(8t%ZRThD"nKA lh4ELh)c+nQ'tYR䬗z Ǔ2v.+5[nSF ,_Ȳ!CA5e%ġC P2RtPv(sA}V 3/Ծ-;+QND(uU<:R6+k6FQJ+l70*Fپ¶Z¶+,r΢wQ`ֱ=Z()YH+e"΀6s}᮰Or?kw<=,_w79a 0!8c &1Gaěh ap0tk#e@W&Ho_3xuUM%KG~ՎFkuݵYjfvKj{!WPR.UiݶM )@.^TD>72޹**I HQy7U+;TR;'xCJYxYV=>rJ{^v* T ٯyĽF PI+x%TZTL - 8锥Žcu QFi29csBB01,\ {1ˊ !F4NS 8A :g@2Y=z!QY-x S?/#[YCn:^UoxޜT^Ƌ*g=fV%mv.Уe=[͵\I4ά;VcCQ?u5»wWW9A߭^z@;[7v>7Kpj~ȡͯI}Mx~4LȽ }tMQ+ްn4\ &8D-Fr )Q&oI_!Z橏¡k-r28}ۥY/lRWA4N OJegk圸l|qa9\wSChEZio$! O<#%"(E",&cU ӢȨ4s$VZDBOyd 6[ԯMR9둱R qƶX c!pXx4*k-賋:ScY*;> Dx︴:y&DX4Ʋ|,PMAlҒ\D8AΆ45O!{6fqa+J8%0 mGILp<t Yn4 &澠v1Eajw*wHJ-,M3LiiPEP)JDny ) e[&z%h!fHZўG@ ٣"^oX ģ6UxXİ c[DTQu!*GUY⭱V*ŝ7*<  7 F3.{DTQQQ{4-Ps?͠K?GtOb\-.¸.vyR>1Iu0N d"\[\> .pq_wl0nEvV0 0UAp]D?*`- HOCǨ4J*A@VZXQ;r==?ǣKA+3Vx {<#d c,FF3} 47"NF-X)yVal0֛>%h| nuѶY}sWڱ1f Bn)&3h 4z$m-攤N4|*P;D "d^nظ^\ T r6$22% 1|ɜJwFBw;)4#FsƳoWG;Ǩ,q2EYU>m7+9wJP(${sd;h6e܌7[2y^(*BQJu)?D hA<砞_QT) @RQ x~E9 ܛgM8г9}O }^[vjE*7XgjRi)kzëD8O">Ip_Z&vzh4+RϔQw`,=4WŇjؿg7;V,UИՋk8PXjtIc} ͍;&Iqg_,F7H o3bؤY{a杲.'F&צJm>w$ڼ18Ʌ(-\&3cMP`5;RCEC//Б`x0'\uH!ل?82:ђH3A'EQ@ L,wW2tVMxo2KơW.=Vs:8]~PiI] eS) +>,=qr|e)P!ܙJK*B`R "Yˑ&'w/Wg\"xB~}x= `e#[-uR!2R["O@tỒfbI'JܯXD(|)c$pDCwxis: H]CS0$.U 7o|o7Rpx) V\jH/o֊s+^KUΈ 7ߛ!7J JNE] L(16L'Υ52* NFWeGgh@E y4!FI!@)DD`!PM X32nse19䥅u<.# OB0)%~a$ *BDW&!ǎ_CqsmMIRi&"Th*5p̂⨽\RD4u|R'HmsNJm G>Hk,$QfdRY|e^eRn>(? @xoUf#ޣ5#r/{OILqIzN,Gt囲0Lǿd̴^"^ŁkZhrVQ TP'(E57k\?óEM 0!GS;{c[IJ毚j$:99/UWg@6ʑ.r>GG}hV9GޝgjnnF|Y~]y͢ri8ܮA|ܮ킣Ym89Û+AHL֙\>E4uڧYd(!G5F U;h8/vEréY8YY=!fm{VP`n!g\FoC_MR/{ďfgp ՜T!O<,<=#;oO}o_wߟ)e_ߜ5:7V$$Sx0wo~|jIKMͧZl2,͂o0"%et[ ھ[k<~= >GI\Cӟ|tkUJWaGlZ'>o҄["P((A]ӏ g _A:LDYMsDDwmmK>b3VߪJIlNNp{60ji% %R4)E ERgHVt}Uu]c PXc{1i]:xUJ&L!Q'kd%XHހ xT!Uo?z8TUgkbG\zCwvb/ઊԾUvXJ-{Ӂ+w/1OW lŖ~:]ہ{I-r+պ^ 0{WXnoՏz bi]*%!\)p lUUĮKIBp JX`TW,. )bJiW8SktW>?KaqkB'+rLgwW+}O['u֤e{,o+-,<6_fP3~= .NIճɝPN]/<~ NC#@ƮnȺ蚶ϼ+-"#y롥clxqÛws6_6N*kh]x TZSƸ1\mď`ai{ת[xܭV,+bŵV-:?kB[e]mBknwֳ[gv`\+Zww.r| kD)0!1QˢCvFo1%+0M,hA.e9>-1OE)z}7<{ij,޽O( .]#6s`0n73D=llI*L}QB~ [p}Z);V6:l<,s[`mSso}m):):aZ4Io֌гtK}0H'Vl3?|;3vdxK<pPjo+#O=HAV@˻tf6R#~0\|r%%obp+5%Z~ΫR(ذa֒~=z :Yk4Fbr!E'G([Fe:KLFAˊ<|QRk&ki1^hL6ӳ{~B,h]D6kx> e [S!Di>8a/!'0:(zL]HyW櫭Fk+ᶈۖL7-h&"+&RH{uZ+hޝ)Gc415 \[1l2ff:wő7k؆z%zKTtPDL-9lq@&]^%j0dFk/T6V?|s ∊,N,OT" cQ!8uBv&݄C*(R-M/z؆?};\ vz5&=4f:-6ZOc=ڈ.ؘ&HTJ5=!lڝ䙴kflCdg5l b)5)!C٣I3XҒǠ'*2SI+ƈ1,C)Jd JCR7(եcٙ8+8Ǹ3M ?bύҶ3o/ 2-'MUՑ9yS.*NU zMbz% %QMTqڗ*wS>9p4,yZPyظUc&DCMus cN[ݛ<0;n9}6!Ttd>n) &d/kuA5-DKhsޘmEQ8T|IrF #OgX(&6csRc33q6O߯4\C(IbEh!!V䋒6:0 H7Z K\SrYeEJdlI{EĊȸ@B NZAExp{ f+7ac(X{: bDI`5}!S--{L)֫n֚4D 6V(UO:.Nd̤Sultl=bw=q]{+5[];Ӌ-^85['s`џYdoo !8)|2DzàZ_O.ɟmʋV%QEA$5Z$m. 
{Dy'퉤 LF -w7=tvg9;͖lw>1ш듔e`I:hjb#LRc0*I2`͐>q +!eWl @Y`6ډ_LMjyYdRlY.l-@#y>86|j.E34_'_]aI.ߎ<hJd4Iۀ1 M䥲2ؾafA"o>nHBod3c%gmqrNײ-O[r sK Y'%[iODTT/4Z;Zx{Ҏ1!# }R}&v3icdvP: ,ߓ* lXBb15S_(ΖYiD3[mnI[>;R]y(0;7Ǟmi9o|MZԌk36 M4@tbvQ|d&C^6Y2U2QP PS .cϒi_SC/ϖ0G 1yp֢5B$LʬӥH0E'BZDtI^+ S{#DT&{qe=]"f]?df ;|OD@ e6;>oޯ3< B!D$X"h!r2{$ 'Hۢ>ػGd K+*һ JM gRn)X>O](Ѐ͡1AF*"LJ8j+]so?Hzyu?B](21`[ ׄ*Ƙ%@).@FH^yޛĮ>;snJdLk8=gsn?7_gu'yt.}SȺH}:G /y, >AqC6u-Oq =0lB k%%(YbBIfaPNI ">B߳$mFG|<[UOo- Jw1%ft1[2Km*YOAHcIƢOlt:`SqzxˣCR!!|tQ$p%S` iQ⢬1=-ZFRꑧ7N{#tvf`ȱZ@tvTQJtVzBYY'xaFIS^Ed0.}7~ۈҷ:O- P]B݋umCȎR ^xra<ɗOƝ#  2v萭HC IU}) B`5=Ɨ=s_Z,?>WJ?^f/ۏ?f8]yOB?~]m5Fˇl{_/ Gg~W_%ðJ$5'1NcNP#j D }zdL}G؊1xsuu^ >!kmH_e62rɮMrA`C "_ % IQMzMU5񟬅vZ)G5侠fIŤمWz9>!/=W?㾝s28[`)6QEL=Ζ-ů`rj8|3CrE5>oQ؆(m:ݛT9$dQ>l5;%|6c@$qXC_>l۸oeȃPho\9R O7=>H屢&WQͫ3]kݠo_ ~jE+}@ֻTQ2+p.&y&_oMߑNi>h̃Wt`ncFٿ4Arܬ~]/Y=;M̹S'a+]G&\G$r@kIz񯒁CFd™nNH j)c8!a{c9Cn5 ,QXA/FΆɛ >ߣHgAhŝo|.wJ;CcY]ỘrO4:QI v읈󂹖Lqfrkhc/sޯ* .~7ֱE*2%OAhtyȋdMFBf B7cPȸwgLNyla(;K")@x(SRR 1P$4T)^)~(?iABI:k͵QFpnnM & <+'WT9Bi>jg7#ڸ~ij.u61r}g`ڽjo9Fm4wna=$]IȻW:[.3w|޹ufyq4n0 w0b>ǣeӫ5gيSm8띫2z&VWi~; Kd'!O}1EJU?Ǵ&nNj8[1B,巯__8:}S˿<}:ŐL/"@<[K叛/-iqnin&K˒w= U-{K|4vgk#@O/F`re>y_:]hҕztAQ iS4TUy% )85}y#pC 7 $QPyDB%N& QZ q0n}:yUEs]0-0Q )\Jϛwr:=GV??}5 VJp6lؓL%OgBI;f*8e2I1v LJkH8?eE:xӥHl;qc( Dx0HßBV B2 TCdռo%'.7"]0BsX 5wqTp\v|DPx莫jӸ go:}l~?z\~ af;[?l)^n?Ia½kμ88hO\a{p=N$UqlPK t`2Kzef PZZ(ktyY|1ҼvOc%jز(52^O 5_.`/ǿ<Iĩs7m`*!+H q.{o5/kj栗}SͲcK=挶\DhR*RP]$LRdg2PF,7J5%A x||7}zk J4 0֭y[8b2A)b6['Ė]EYKmE-ՎiJq/^v ܣdO(֠I DTDQFؔwkZCr3HV1$,.H0F]ks[+{x5Э|ؚlMmUJ2) Ȍi=T{|ؒ,>,))^qOSXWص)+EO`Taoko^!a/Jيݺ{o]\1[E)u=5U5ݛw#U}ZYw 'ֺ"1#_}&j=6ˋT|eּ?,r^BJ!h @9ϖ Sd@ɑά[삨@gΤyhױѝygP+bzs.9s[m$,w˼ nm cۥ]/lc^G)U4Ab6b=0XX9rh5emj *4*NIlB.EgZ( ҕIqon1s rH߸)`VBg_gxDlXd!$WT\K)~k B1Ti}/.W+A\T)k1kJ2 ICJ 5JiV:Q>ڮ cpƜm WkRX b1*ILMOe/t bG; ,({Mٰojg:&_Ŵr[K!9W?q*Xk]\V4$H4TMܤPp|>_XG疎A!/CXBGMĎt\Be _> ={Za/Gs_?k5kٮwFrngÏT  Y#u.Mw5n棐'dY?mաޫmWC Mu7D.4̠L>\ǻwCqH1 J*(M>dV4 DDjGL"ycZ(tHqkoiW2g۫ʭ*d_egMO'`Û= [6ײ ,<>1{/=Rͪ냏CMU N !e-&gjM=2vL5{T3sҚ r0^ƔFoR1լۛj3Vv ;rfi[\C3tW:o_/zzgYJzjXg'L0a1)juTX4(Les'Rp|,$#ź GNZF1Uk8-A⺮xhڡV>Ţxzb,ɳĠY) f( 8 TC.ɥ$%Jʅ`vŖ*p חiS)fpV@ܐM"j}JTyv1x:1d%bZg#)*JG]m*yWȖ@(1++Ի \NЭmNU[HRiBc0Bi\aLՁ"!CB VK@(F :* dG2@c@!/ab1]Fqy8,eu>:^>X{NqZvSՙ`=bn#;z&vdm? f*_-RN%̾xy ;tRfF0{5Vtž"LѕI[.S))G*f椫xRyaMWfJP *Le&Nijl U:Fc>$;3n1b/eɐ&q$9=rN3/VښǰwGQ5g>\]CVy.-]/wݻ]LԑO΃8VJg.?9b#Bnd/zoEt_mc,~:Igsޡ\wiz?S/RvVEʯouuş{e_S?i~,tK@{u7~\_5U,vMқ)_p'gDy^-m͟7oy2|X' ^05Š;$4Ģ**{7ff>x}@@d76Z* Ik>J;e T 64]5}H1Џ$~= 'iIV8O產1Eh U=]$1HM}T 'd(*Zy Mֹ$FO)S^nrMhi6p'Y'wbC{[YgQhog5Qt}9;6ȣSRB }+B.`"D{tUR)9 ۷FPkJB)r\)VCa¯\Rpvٍtn+Xh;cpXxiMckw3"GDܴ$6! ZrZq6#xoS&_i.:JhS 8596pVSA[Ԇ3YW:SVFLb!o*JQqj k3Vc:#aû119)n~|Grm&w4\e̱rm=RmƏ\RRo)X߿1ţX9)$ yS9e搫}=iܯTM;9^lكeSJ(%49gVicmtŸGE|S:|rÁ}L'i/oG-|O\*ԡZAu.1mbcjBX#"Z^.F`x`H|rB Op`U'UXCR*hRT<ۭq80߼ZоwWd;.Zw6i]dO:C( B HI~_M[V6. F<ׯy3"KC#B lbY1N\E 1*VZ0Jժ\Iyz*0m^ s t 8^=3rMבPDl1>(YbcŪ 8ĶY]jfw=C#CEg7Ic}Rolyra)a\ydx/N @F)'{] ΰȥ d ƿI:*hRNJ?P}{bCЦ= *(AwG#i5a|s3P0% `p[cPkϯ'ӓ\gOm yYse~fvSILC֡idA M?ϔ/Sǹ..}N!LѮn!bdfg>m*v_'-$JfWu,YPwR'\rK|,d}2im6_ȋN8XRgJ NWܓOz (E ym)̣[[x?i m1 saR=mOy{]$Ϭg]/[<}]Նbfޕtx._M1<ܜOAݏφzzCXC{(+޷tfwKS{O_6ٮD5S!2>v&mx7]bԯdYab.x=u@Ddb ( #n !^)!Qr>Ddŏ$Gz >L (JB`P#"/peuFu2Cm{<Iad"[ 962Ud"9D0 4J"^(3 ,j0%K:Q V;|KH|1RH fs HQȻ Mv>Y30r;㟘׍ڹ͋LIחȏ7CTKvz{xn^~h7\|g\(JTPι1YG]J0əL'ΥGdMl|B{7\pjIN6qY DD3%#SJAi4<$m@x %"ZT"K8D *C!N%'@^[X׊|} 9Bx$*SR21P$TJ"24~*?ҴV._LpxR3B &)hg:A(H8AR9! @,%"G>pVMQe)EYd^eB}_z㒿!T)‹-M&e_\q1\e)#jTCYFOfT|wLTMɫ3. 
|,dr@P` ;9W\΅ q߱QrhJ{MՒD YG'$:s}Ql!ҁs$785KϩƁRp2$*ν'@8"sAPziT畕{;~*FF΁uAD TH,&@X\zqowcgN\ZܬOomgfʍg:0gZ<ä<ä;hm(^SI(^͓冼nQ'Q _Ө|'ч6N=EH+$QW\E]ejAT yPWߠb=RW`-F]erurE6*SA]}+a#u27 E]!{2շ ތYjڋ72Izlɦmj ^4Խ?~4MSo . Ow:N"繍s@U\䃱Yѧ5~\kJYg˹_緧GIP&YRs$C#h6:;B r<8v>±M6c^޹2J$F (̊hX*/ļ8A/NG:9TN$”ᓼ؊{ m{7A KH!)pY09 rϘۓB~,SWnY;)󪭼AMJ'^7~(I",)2ZA 2;VZ GH 6ɡ@ Cj)!2B7䄦2|o2Ͻxm2xU\1thJj4+OU{r,SJvEuTwoP|*7]}QfKY|]]qQM '%Jk$[d g0ѼM`ܟ^.'GM,mvl#rPyFbŭjWQ֜ۆ5;$}H5wL>-滬#0V':d<<e[_)9!F9H`$VKNn|w1}VB!6 w6˩Ky-ܜ4s;"me}b'hll!5g/%b.!,+>9_ T&fj5s]wnMw|ғ$ ktdԬAm#5Rg-ךROmZ$с>ipTȬӮ1Ir*svy6sƧB\D#W>_~1>gC!V񙝢X6rqMS?L [Y~BlxSgɍ"QM?bspCN&!t2ZEP B|$2?5kSZIw~:S9x~CTcԷIq/B*t]ۀ~\ٜ;j]Qm$"*NmRlTd.zkf`RV{3L &r0%#vTa҄dQjngCdL㹅exq.|Rl>jP唈 ^V0|JAG1݌]xsP)ş:V:V oduڥ}p(!DzGυV()5hXC#w&@()Q4+ "ER<|Ky#}fSyMZXْc[_&;i@f_.}=l+{%S M;ºmp\Ⰴ^_\`%,x#K5/W:XIv`ml-<FbC/zg{HH5QT^p ZS4ZZ>Qy5[2 Aznl $ 26 L$Q9/SRXޯyv5WFLcx9Bμ?ld Zpx'JՋy%Ӈ*7}UxY@K׃W~ O=n J TҖ%REZ[aKP0Of 3h.ZOmG6ot戭2^gU"R[U Ci`; xi?-%B5Fspj!2`)kL= 1q]Ը2LM%hhhK[iBl5k p+0S Waݮ:tu-hPڛz{iSo|1ԭ\F1N; R&iC#) Qюj\m JG&a#i9XMr=)ZcKnQ'e n%xp0y{{+__mg/,F4ـ)*P8 WB+l~TTVB9orʬdqTqӍ\k~ :Bn#REvV`B|)CFZR<:ȁGKT#?F]3ahi#'[Y.( &yaKUy)MdM:`#|Fԭp0n~-.V+/;"̟`D3 DP sQc&ZBg)NFAxmժSKH! ECq&P82ȝ s,P mDžxfuek+,s6%F9{W/hТm(iעM#kD% .Io޾E]8=B6D7'Y"Xp{RmĠ%ƔX*2$!&F4ϛ<7fq ~x4{ad"[ Χ|*2B[T Hh<9EP4 g^,X`FKt^V#%h+jcQiYr. HQh84C 'Z#gMw(U|)VдW.Cwn&|WO6L .WV ӀX$G]9 Rⲻ+$g:'05 is$x/pBnMY ~)R'J3UEkyHڀF+JJD(,Dq@=U+Bƭ W)Xӊyma]QA3GODeJSJR0FJBD&_s>Ҵ !jdKǣAPTh4UNȽu$G(g:Az.BR"x$ ꃰ kePI05ZR$O qLUq֏&1ZgX cq$hn0q h.GG"*ZLqQߥmso燲0ߟT.ezrU6+~Gtyb Lf(gW?0J('sM:J}ǎFn;1*N왲7WK- g%ol$F:z!NSg y?z U ~Գ 7V\wĽ> }.&g(~uNTxs3/gq_^\NƖs!y3.m6?BZI]-Xwm,|݄/ ^-Mrp 9dFt$;O~-$TnT4,QYgieXe̪DC84#G1b9kOgknNiyUF_uy]WWyޝK>̚sįTE&[8tP'^/:Wa8=Au۫^wϗW?c[u#pCL)~=Hn].[^ͥeɻn|*R;{|4]ym˹1 T?v /7g/&ӜUٰdW&zL2\WPl472ݹi @AZn*t$rUc$l w@x+H2 ՜@+TioIx Q+K̫OW>CzogG;n(^ (IA*Gxh!a>Łvz=gogƯ.6Ӊk d륆[; zmb{NC.w;CntźW®'*CR~|CkY#p)lN4;.K% =JAzq Xd82HM[(" $QO4G A S2eG{{KDyw;LY`sﺽ:>]4 xX{dbz9ZH0/Zakoe_|Zga7.b4} 4v޾ӯ$9g[ V$ 魧.]^ߙ39^w;l]cszz}8N/lt-klYں}VW7wކoJ^tt=oqnQ:n6\Es镻^Py_6vAT[<45ph~Y+m,%+7$љVTjtоRT=rjt+J,%:X(5|v1>H>S%R1:Mj\3^{k|"TH/3-/]Z^ڣiy%*5I1H` AimRQbڲ A)BZ cSn [t\݇/[(KU{Ǔ ;OQ.圐\Gr,q< V;"D[;AQLkI+h$qɍTy ^kbB¥l^t\&CI1=):+yx҂('p(*&%BpLD HJ:(2*@b=1`d=$p A42g32*Űf슅0 m>gfܑ[oeyhkO=&F_IKgR`҄D%ŀ&ֱ" 6IjN9([-S|dg#{YMtP+Y };blbFa7-0CλA%-YRb`)Ęǣ P$25T&)-X8Ꙉ**y4!'̈́NSg3"~8=|h٧bZ+.¸xŵŒ Ip>0 N+&!$y3@ :Bža1ma<1 i' Iފ^w~`mK`s^lICǨ ZkAp./<@8HzWgq5 n,qƠ|g:% %(5,9*51HpTZ'%Dtm]fRyr^hס>Nϖ_fzE*NN1SNzo1T'~03h o攤I4ɼ*ha/uU0'@Rw4)!+`( 1b7TEu>#BB/z؃? yޝw!?sddRݽ5؁-6'?(h/j~f QS)(g=ȸy(U8|Z==_|0 rY\MqxM4b(jVhbkbW AMjEqOʁߛx*4bTC(W?Y Cd5SjJgѠzq>kF+MNsKǧ_q܎MΧOa56:iNzgŐ˱sz\_5j "B1cD5.n"͟Ua6=ʇsxOf$@ŗ[wjøo7Uȃ)\9pR g͍ryx(9j:ʼngtFö/_/ cQ-jc+nYO(~x;j.vLT#M!pE>EȬX"h97k.c;J̳|(v4%[i~vMrYuEѻڎȆv6.lrZ*ċM"E5ھ? >-`UDQ$4.xlAix%iG9 <)U_UwC Mk¼i5rzT/š>={T#ˬw?jc]|̍'eW/:#ufw7|;ƛg~æѩ#$Y{|̥;"_hQ]8eRBG!u&'(s-V i VOgӾ"^Hs`gzϑ l< tGeTˤB^v: q (6{z? 
D}pc}0 ?82z#o$EȨF ̸:@e xv>M2Rɗ兽Ǫ>g3pr  ?)ᇐ ?xK'1QƳ#X d^ m9[Ϊ:Hu\Їwd.YH;dMfNE.i 6fezp_NoQjqc( _Xb#& _\G(Dr]9p_ꪖ&r4c5&IWg.OybCUo 7(߽9w=ųy=~7|w,]1 ~ |{=:]v,[Yh4+L.ۥ~s WwL lL둘bV}]>eVpt2avLzef YG)k8ұRZں<zU>b^J4)U7<ɌkO~\-GHߨY^»VfwDEy}HNi =k"IEB8$<2bV:.R 㻑䉫s5h[^[,Suu< cV]'"(kK# PZ kFQv4#$?gS&{M Ѿ"KϬ)if%0ah_HJr+(TZ$NuBDV6+d J\3uxm4% b4Hp8=1B\E`p\C1q( 8)PO'acTW{XJwd!';|Ǜ t_⺟r}e>jT.&EBƞVz`` ZUAK:6rlO|05u1I&763>'TɡG&J*&Ζm 'K1_@#JV eXjW\R5U&MFlW4ߨPΖk$Q!':z 9|$|M 3:9x\jh@rgj]ClL/L$*y$k>zYHD&G5r͕4P$!f1(2Nh_kˁ9ˁ<Nj5&xS} ﳰ"q~7_-ro_'x:]z{BJ?rz}\I ׆(S pZQ4~+sWr&?h!Ͼ = ADꄌB$!΢⣜t ćyv_Xr qc}'j/5ym A6o .|DS)njReIJsb9%@$Dnq.RNE 1 c$2)_"lغZl5u=cJc [\i em,PB >Ub-NI CL 1{>f[?YAMR^"I3kL H"A%`mE D2sZ$Q#r)).]!=MRzIPlNR؞HZcLH,{zzϴ҄XsvsDDz ׿Dۜ٤q?@inmq(}<ɘ8ly:\l38 ڊZHՎ3½RGڼL|/ÃeϔJ6hR2/ d:6Ig)o9g H&V(3mrM0OHe*Qϸ(('jfGQ8$[gQP\4qQq[3&Tk| E0)d;L!Hy1@hKx0K-ܟ !j:fi |ҙ읧c'3y?J!T Oo.J*TkApP )HN찝Ծ'۲sz m"S >( \$SAXd:9ְB$j$c`dtTZ'%!-W37I+| ~{H)lQn)24ko;;O3qIm"Qfe=l^lNIDK2iXu G`x¥$H 2%d,ǑM#xC>#܍ Nyx64"Eu0:][Xb{5] -³uwb'((ASN&YTzB~~e^%#{KAȎG)<"g#KC^G$J,j&E2 #$o< aIR%NSy2]waZ@;ZgF/LrK5Cݰ@ъ:kxk4mi6RG]2,1|9?T$􍰮S 2/(A2ը{Қ]n FK%@< pWvx_/Ɖ2QJh7JbR:iSb *نew@3񇸦imq9c^KmpE}^iQ5eyM"BhBcF)wѽ]ӨܼosB8Tߢ_p/{ާ?ƽyի=Q҇ij1'â/p'M/}ףguoo05꡾6چkHDQb=SUe_Ç{M[@3&Ѹ} س߹qSGC(3٧*D0(cOf2`/&̽9v~|A<]͕S(J .EhA ;=;6\RDQnu3{W8 ބAG}aV0d߫s+nI&qk[T3*Wמ練+Kqrx@*"uu1ȭ3mFKŜw>?kmxWϳ|(vԨ(84BpqVԞûHe`,y ^աS] a\㢾Mht -O4ܳ`Z3^ flEA+7pʯ_ϯysqo1g0 8;-gm̡V:Hu(Eq?h p:pT`/{n'n0p7/rju/q_ۿo_QeMRG1H{|̥+?iM3^;ZH' ?fl-%Kjp(6 we=\ !B#^=,p5z݀Ο?fNwQyCi~sfgVvoS޷|؟M HӖ⑛YPWwliQgQ])v,6sb'wl|&u6tPWjޱoY oWc01l!-ZX 6U{]-XWF0 OO<ש^ =;|뼏\Xe PˤB[}<1r8BB9/G?vPyN>br _8vm4 Aцh90Q)3.i :Aj0&HɗUnqCQ@n8xJ#.c!Yz`rƕыb5fHNbL DD% =n;-? " n ok&x9F#}dkCHt L&/4H?"œ$HMQɢCFdooLFLd:ѐ0Axgy瞊pqQ[%tbsr}b"+%PԊ=?'6,>,NsM. 1^acYq]V%P@M5މ_\K83Em SKi!%u|t D`J%ukRh68W\t:LjힺU[<kc=_泋Urv1r}ϗCoCaax ;+AȆ+ 6^IȻt2le|gzȕ_r^ ,f3s6 ŁAkGrd;9d9,K-Kva2Iwb}_.'T⾇z5Œg&}1{Ǔ/g̅OlS'/fKEzH)4}1r6?ʃΟٛn+njhNܬdӃxr~Lo_^﯏~p?^Y\HR"j{uGkգyGKPi?1L9 05mݪB xARNDbW+Y)SU(Z^r)+6E;jS;.KkSɈ]&t߶\Te'C0^i.%2&͒EB g"Z.]nKb ,ft.D,' 0(.F$Tl2#9x'#bg0cYM(3Jw~=K}kfouk׿uwFgB)X}B{3۲*|\;V_/zh~'}^γ0+Wχh~g~9?Sjo~ˍѧMTVzr:5'?t]u:Od/0S}ks=q}CGaHC|/&=M%IE':I6 FYY%e)蝩\Ӓ$=m_@?3JwΟY8$Vz({}n9<2sqϳ *A94 PbT1AbR|׈.:Fn~ߏ`:R3gXW5rKR!S9l05 r ooe6 O/% EE{<_}_nʍvEٸ)`ίknKC[cƶGZBZJ:6EZ-=*yԬnYnh Eviʘ|$jK^>yKG^WZV+av.;ZJ.i;q{wo~NZ+w+l'%̤$LzQKf$SZΤKEj1'ww 8hՓ-85,NNd:-gѹE^:ɖ_:ftb׆I{mտ~YAswgWNL>>]J_0x͠q\5xLހL6s!=l|]M+ǣ'%5Ɵ3vgS2i2ݺp|6 DպRF*Yaz!yw守.u6fTx5>V*|l}!]jw:Mj~>z')sYUSemAѓ]5:+mTR0}Ϯj8=خj8(b3t4УeAV Nrgd>!+:ř.{A{<\G9$g2Bj.*!20^y` Y#YAi瘮ύfNeRJ`$I~PU=T,y3C=ULݴKt>_rыJ1UUoAI療:pQP]AޣkpT"2F8KLvN:9Jof,G'J%m}ȲC8J+Σ4z3KLHH0ĹwN*9NlK{/be Br Eph &Z h!: K7g_y&CC5%ED*U##S&\N&a@*h'uȁT sYm]BY4311-!XxZ@C,$dPvyHJߧU60u 42Z:֤CO&bf:R5?n$^(4XJs3 :v@:6NM!tg`ښ6X ݁{rz;r^~e'ri˪4SyYJT (Ĕ#ӻ6smLWw8lR&e@d#4%HIKNbXxdu :b=c~pU~hϞuR xLVc4C@7uItKWZk|CfU൯KJ>̴0͠ϴL[f33]Ġ(\ 1%5_TAJWw4:$Qa%;wV~tQ*&&||׮_s9/+uy\s}Bc踒* VYCJ%@H \-G$if8q^ltH$$ZK# |Cr:zZhrHZ(΃ L%Xyx9 Y({n&n갹%MHe)"l̊ ixqI,sG-y߶XKsߎHOZ~q5N[ 8VI[-t3K$%3Ld)"Zj='Δ¿wMƞh(ᓧ-ou{i:P0D+cH^" m/˯m?AIm:72Юvz1x47ўywvd8yCM˫Y|_޾4Fj J]{3!OJ#gـfj<4ţuy4\uSl05 rooe6 O/%1EUgw. 
_κe3~9PdnfFz4=\swt@!X"#ͅ]`)i 1)+*B_R}kЁKRƬ$Ns\vhIBHc:u Y5x_'Ѕwj3bpExklRǼT[c  8u9T5n t @v<5(l!ϫ *\tpUZD z8HUvoL~ ~f*9_)(vQݻ\Be)N̛ʴף14P ";w/Bo`R78hAYRV Bu>6\Pɥ֏-r3V J#}AL-#ҙJ~@/%Sܞ< PLA&NC FǕ _7tJJŬl_Sd\E!\W<&CN?oP3'y:*Cf~Iq^qfbaR{qE7ɄGs>` 7c ggyz⢣5UZW+|ZyԸaѰdu;8Jzsf^UJ*˿(slBYc>6T_1'gǯap jmIZ#hGZ63_=}rH#Q2lUL`U T4Gp jo ɵ|*SDRL͝u .*[lB3[͊my9*b|,4EaһЈR* ZF8(K2l1_u͎q'2oil7+YLQ1wzQ& ̈́)7J`T2zR(0-Q+SL!ۛZY5w ?V6;T%ӑ_8 u)e %4ELyd/כ0̋?6y;&Clz -UZ;d*jj9oZ*]J|&t\oT|:M1XxB@_fusaUQFRϩb\yzTڭ eYʘÄ N1qڭ uyG_,Njw孑aߕKy@,] " ?6R\MBYj ;l@fˬM[8= V;"=ʏe*ؑ2KXG!ьKnʣ51m^PbT\w\hԤۭU.oO\~W+yz'TOF7U.$C[]yrR~4bͽ5=ec0\KVQy}rc%B"rLD `$ψJd` csT(C`:&YtMJi*P.!HUZ8c[,-c!;`bጢl/IU_߬;Zf43O~ݠt];|MB\:&**`4 -xa)MFTۢQ5'ùnzÓiƞpg6A DFж#&&, sЦmL]Aָc[-8A]^IRc)C JPq".h&#I*7 Q3A#gLP8"Fc޽ 3vmxmxX1w[mQꀈD\h}ȴG%@&@)A%%YR 4$!488EDk(Z䒤RpjҔG-X8LE5ZLh4!k[#g="|qq\6µKָd[\4->Wq +ylWs~'pLam [BNF2f@{ FFGuRȝh;&LGp uS Ŝ`hs&q.۲9y p4xS-J;D+I*JyQGZYj m%=!:F * JKA3"$`Z$BBC"FFYj֜INΔt Sg kZ#gq><%Ay\~oǨɇ+`wK}dҾ"[6x%[NƟ9Eu5H@8*$$YOr~8Ï̫$}po)H9p<`4Y =ܐGgבR((I*H BBTh$MY1ox4~sd"x<:zt^xFm!BziEc|0OXe ֗ˤ0F希(<38;I>HZ grK'R"&!(-F5J8e%"!QzC(0xc3Z2ř .jc^*OkdU|B mfe?i L+`ZKІitE^$c hJ5J8 %@ D +u .u]%A>ntK") "LiJI/"9E@W&!6? :nXKF/e+A*Hi0TT^ycs"H =RqRq($ Q:PI2 5K%'GNP&nPґv}փbg<.{DŽjMC ՂvZs="0)Zǐ`$ש刮( xdx2{2S_\Gg}l6;?\vEU7pS ]$,̾~tN&Wû"49!?hRhf6^ZqƒgŝgjZufďՍWדrvi+Oe=휣Yu&%# $#]--F3kAAQ::~f:1WɊSZ/գNr٨˞U+mo!g`*#y`(A*GbPM1pS<[Y_.Bwxկ?>~u_SfO==+\u@x) $$S7O|hIǛ͍dhS7Kƽ'>νʠZ gO_wqQ7j'o:ݮhW22\c_iNUr MU["o/\ B )-#q|iPjS[OpC) o*&rjC " +T.%!GOFLg3 qb^YvWFupuJJQ*G ) B:||p/wcg"Z㉭"n?= 3-lOٮ1LUOd8IS'-D-%@#WR:K,8Eɔ@xI6DNu,iw63\:椢Gļu/@A=MٻF$W cSwF h,vmc,ca^#nԤR_odIQj%*)RR-T"|2#Bhx1jJU.D/18-(&L\ gѱ38=}iLR|%Yr~nxtzۮ\+Z)HG.\BC ,+Ith~+{8}u՛a'χx}ۧǜr?RpCӗoxqt]w*k ^CTΧGC׏D/] 770s|je/{⋃nDSOb;=Xe.}yԅG^+s|x'i//DJWcT곟^~?~?i%ۭILWd |M߈Tu;n3'1^uGN:۩ P\%{>e:}َ*u[X{s?宵 |v4K_zӽdzmJ%+3'W?6o2y l͗y2b2xZ9M?CFTl Wp9E4Gߠ9Z^%6kilDtVb&%ZMr)2.a{Su[D;-H_mۦ].ZK;' TDcYd$ǿ6j Q Cĝѻ9Ya9qoouVIylq6+| -9v/in;۞Ȫl44Z9Zd_vDi%qޘ=&n]6??Tu1/kmdm]Ջ1/87mgc39*CxȊBM"Dv\^f]! ]Ea7UUufx?r4 ql] ԻN0]'r8<R;frrrPk垫8U*>E%E ֘-#>v#sٓ #fjgdAy*8㽯ФF-1Oj 29AhFUqph/1{qZV xddN(eK L;xVPҗCI[B$6BJȄ>:NĨj%Q)z{ 5O"#WY6@7%vo(AVo貭&D8/wg(84 l2l9NRbN2Msu9Sjھd`dṵJeZ2J:K#J֖^ u 4(dU(bv傰iVʛl͒8Y<<2hCE#&7,A]urIӇF^?~cА!NiAͯ11 ::l-.鲬Mth38TAK(c%  8qS?cg*-KRA'DFT6Y-BvsXITgOC|d* YBEg}e I4E-reYM:7S=О[Ine% ^W*+>Pok:1^^y|7S _^* ^ @LpHj#AZRx*u1P*B7$+{rVE.!2~i- %:[+d, 4dE SLn~@N2{ڙRKBd%YNO S2&f9|1>? 
_,ٓ!缈)(Ri>3ì1rT"̙FA4Dd琋>E6 )5%x*Ye%$+uй$L\@vg&0Mm*nXrIL0V׽m)mhlچ;k)=wѥW?-@>0C޹Wikt4/TJt6ڑ3%)K)G !YFnf0D>l-5jp6WmR)x %ʀ-;9wQƺ);w~Hbq~q198f%X?yYL"qv,{Hwf3`y`Dk=xH\G,2SGjbݽ`:h7NhKE \A*844'399fJuԠCʁ1K٬KqR0=4{O.dSXU`O^W kmVm7}j/eD .JSf](TL4g \M.s'b{yD 5JJ:5)1:zM >4eè]oG7(^Oqmu4E3iIe ?۹5db9XG&'#9FM!$b^Bw„Y2noSs͂"9A9OoӨ Lm4q5vrkWUCߍhi"<粌7j_Ou5`Ox ԆS kR RI!}H D8\R b2TXj(׾r :ʂI*1@Y^'9" Y 1UA=ӣ혍QoX̃-Y8gSw*?{ ڸ8hOHI&sDKcEZuZ 1~u[!*jXU衛+%]<32WBhzN% (PݻL,NEj<9~=j=i0t|?VFOȌ:.IztCzhk8\tRqa/\%J&cL}{fFN:&#LNV?74񇯆}Q(fa>f 5]j{_]]+~OV}ϣWrܼ혜We<_p-^'B֦+׿~׼{WPNJ JE8թLL^M4_~s)z+ZǟF3Oia&.t o+罕& z켬'M&,ڌob}qn 8K/6nk#ba٣b(.ǂPZ`б@Ş#B*jc1`)i[c7aW޹}v$g.],J]zuG/zކ/=]Sw@0hZEXFpaY7xwڹbcޟOZ)B wV#`ScRDM>G€Ж}[%@5̻l>u.U'wѬHpaؗ6[x/cܫ j7ݝ UW4Yխ'Ӆp]Cu[GgzT#滺NޓfǷջ{Q +6'YT:CieWd.4{E /2hiדB5t13GZꅢ.++k)3DZQ +:Ts,Fϻ]fQ4Ce#z`ڌ>_omaMy)cK8{lz[a]NԻ+*(AYX*v2Er2t 8-eǜIu;V˖) B[Z\$KLZjr GۜtNR6nOݩ!;ƍY^j{(s=d&(vXw-JW:d(z;ff5 > ,z50>A=leٲLYhTVL9AFEjLIRZ$2.77d۱|?7_2xqeȾ_uu1;]RWGW܉+ލ{.oK_ ,OOv\Sވsl L~ߦ˭V0ֲaQWɟ+v,>'zx'k>)sK#8Nfy2(DO2>hF߼{6^5&lfۤ dzmY t&+q6i 1%7jU~X ʏp9"W#JW\c:\5+WW|[̄#f0OGSϠYk^ 6+Ȯ#\ nt+jk:kJGvj8 b]5shJudJ`7Wᣇu{M[E.[#?g_N^]cQ1zүߖ{Qi~І#(vV4Ny׏:|WP+T@H93D@i/y?voGS0Hǘ0C@+bD`.)ƘrFk W}Kii:t l)|η׷=k7¹gk)\sqjmb2g*Y)$ÎB`\9 Tm (F⼋:o jfɟkůzm5<~;vJxS݆d8Y+SؓJ(U;L-i #ݝ6徶Pܷ_Eٴ8}Ywn^>e{?W+ptmGx;AZ,z'"ޓ"vsüלm 6<ӄdK(vq@Ke@zF+3a䙻IL>, Җ<:5HKa^e 䚊^ lI5PX!Q3ՔP fPaW!jcsƤ V%;S;n<3|0RƧڎ]^\sw?9OYצ }jʏSջ斵.IWW*ڄ5&Y|50Kim^I4 ]64vDGo͛ :bmY#pr6W)% &\LƒC(EUKGi48"-{a1{Ye+yWPCzR]TΜL+0#s#$J5 zp5Wٔ/_°u u^Yz2]XʥP #H UyIŒi5ʺk >DO:aY_jRz>F6\aWJVXb(:UekWB㍋XgJ9>v@>O~/e~ouo!Bֱ|$4dX5x;L?;+ hR597})Ĺ)4֫p€[.€:[[!dc.PjtKrTpTVP( Rd~F Q[OsAG:7d4`5*mR1djkPL Vuu  J.kā )\ P (%|\gfsd MhrDgJ -p6G.ʠph9eyFa9$bFet]8cx1SanI ddbkQ9j \Rj_%*"7ˌhKTu? |h#@:K}C[ʋ<HFi.+WW_6[q5Xg_/Jx  j{,<3eJޕkO/۞֫!o x~ateo&v&27FESw7a:_QDJ 0B|jePH)Gm1Z6l`]Eٲ :-Baߙ"۳9tY6_'ȉ8+G KOMAP)Uqs{sX4q59/Kr4KÜCd}Qذ!CB0)]+X& Ta&oj/ qo |^aVc]?Ђ&klLJ;%lm o~z{z !9C8] ![-AFۗ+u`:fb r %ScQ8h(紲 (QRl%#[htT0xm!Bp轢;˜|bBKtXg/_nӭfiqUһ5K_PzklMb{Re2&gqEű'>rhɂd.uE Z±a%=@@lȵJI!BP#J`@ JsQ#Z$Jd}u%S| ¯0T͜ v*ݰ Mg,# o,jv15Ij0ӥ~]7~'?O8bbkPBNg$ZUE)uWBr,QQB؆LE M셬Q`8FkξSU#D0'O2iĎ"1mQ;6=2F|De.*e5kG ;.b"T͘oj),̰ld-8 F !jJ™Ku3ii l1xne⧝:x?Pi\A1R ۜ`s$3!&tNGrz$ OC g/ ^>CANJ'1$ЂZG8eᄨ -{6SO 9k \lJMBgcзI!gm[b{oug}F)S>nHa)DZy t#WGVyV#\qG 1l2 &xQBA* nUX$$>tGz ~hqqKQfҐ~$ﮟ^U*nEbkӼmA?38 BA F[} Jz.|,9vSkQZKcVzbts=KYT\3,A2EP|J"T͑sn{@aL6,ͬ;BN{]kVcnF`P>"`r1ޗ."ПɁ?M &!O tt{ ͠ض/yM`KC4wגÍ"doOA;85U?T)4/~N0;69]91$[N.eﲷlaGdE0Њ8d.0 *q"*7.LMYy*̲7Er&5p̩%>bF$4cD U=BrAI?~ u`sX鍹V+>f`l+svAru0E`~;K (2 Rq0ksE#ϗ_۾c?cT}w^`Q 5/0]d!%٫2ضGô(5Fr`L'3!$9|b9ooU:t]u7\LV.@^1xxd{bE(4g{ˇ ȇ%c_>16d@iCo/l,79nؿ58Q/6tĿ#CTe *5_M=txm8yw?Tl^xdHͅ5T~ļ0xox3)*#&{`>mhWE̪9eȴ@u6%A1J-XTu؊<Ϗ籲@HFǤ"`ɳ4N+ , Vdiհv]8\aBDTcԋu]#eydcBx0", I)^|i?4tl\[VA& KLa@#FZub7 –>y->?쨐s=C#c!0LTI@< k fzG8z\ld?[.Ѭ3{zĻn^p跔1{b`ϱ$ORZ [_z0>45̞6ÜS?".-ʽr's5v((N1-6$.dnd $xs:U`fE-6`MHtpa9\ :.+{4 >ݼwlaPx>e g_~[^LonW]af"8Ӹp[UiM_`Nqb ﲕ6Uogɼf'o_M^- gpM墟o0WeR% Lww"k3w _2ĐX[t!W5*=N>>:zӟnhrmmuukR/(gLaFRFcN%K |QicPdCeaa8sk۷_}uW7_~{|Wo|Οp<`xPn9joV5Uly:|zk}|$^BZ, Je}jO\4;,N=abz3?Ju+ЖI(P*_?w Q C' Zg`V$rܴ$Ϭ yD(,)u14)@ )-r4} ~W)p|'}nÌbF;@WZ`[g6w/Hp0 Zd(b}2&Ы,\QF͉VϨw4~!; wݙ*uݹXչuT&ju)]^Xgg@>ќ s N^v.f#ƹ#cJ;jrG:GK3: ? 
3.AGAz82Ĕ#SohM[cRĬ€[X1 xe4zl5 D,#gKݧJNԠ 9)9wyve&oj}j޾-@nVJ[9W XƂtzvM7oxI{{k{'vMu*067{jo1GøQpŸ(؛pS:eUwpa 9km ol{K0}lϥk\:D9cTiPP0vY4a#A2?9HysQ (>ZyWXT>v07ܟ":t7ې FioaMὴ%>YmuC_o5 cGnֵ<שHu0|gz.גxVqꑍou"ISA >{:$nI@1ا؊mШg/+/}1w}aKģ~k@%q8D3|}ZvΝKAº( DY k)bX3;_^z|y%X}M'9 =BWWe <|ó E vˏo,6,W=apw* iv[7y?zk8m%dˏQՠ *' by24]& ;m8KQ<>.#[k\]JٽdLd~9V},{MDM$F|?[RcU_Xo:Zʄ2a A}0A*1j 3ꄷǏ3I˧sgжgYr HJY\zBcqIp m$L4PW7;˙Y\@󡿛eTbسz;8߈TcCy=:j9Zb`4 {_o-za3/S-ahJq[5Kqwwg[:JΗ d𝟿VvpMù/)UY^jI^(dA)=D֞g*_j)T9&9V +=*.+B$7`xkFnHhiܐP g(7!ZDW p&t.H6Z-NW/]] -+e{+Z$m^QUBd;|J n]`cDNpۣ &7J::CRQ&f=UKX[*WN:KEnU,qk*UZxeP*IW_]-\ 9j7vij'vCP@W}+u W9ZAW mVUBXGWgHWD D k,[CW .mUBdGWgHWTix*>#npk ]ZP@;:G\,:>VI8L0˧A|vH>}VYnߘ7h{5Ky$"kY"MXd68.aE\qXtʿW^fqgjٔ&A#[eqFБcEG78Ŝ9s˭:F jbb,:6Α0֣B3Afj텿ڭĨE\ۣ%5vV5XP*;M 59Et5ҭɹh)MGWCW -5N'孡+?z]Z|)@) J!X`ZCWWUB{pJ(I4ut!fd}tx\q"tP2CWtˡIO/]npʼnUV%}7a>t];i,u G )솖UBIGWgHW#EQ*ֲ5tpВURututE7[DWHJp%o ]%Z4hrv]#]ij^wJ%Nвdےh IڪQ SRZ]-`[#7$5jNBH冄Rwr9 KdI)k ]%5jNB+oI(eGWHW d?{׶G\qZK䍀aÀ/ {yyHljG6D7EJ*MĮa@HvWw̌ 'Z:xuA)UUrig) OüڱZMcV,4 z^!+aI{Eݹ=D?eQFvOz<-ڥ}@ 9v5X?iۣd pn KV=yCiTrsDڅѕcX5j)th3ʝBWe +bŬse~Q-t *PnI?Nn9Z'|bQHW|+bdi1 F]JJb%.x2ܰvF*WAW_\ND>>Xn~__4`G1Gc~>)L>2q́G~wvs|? x A9#k'τ¯G1͛׹Fԋesxx]rw{rw{Mn1)*OzE9Ud'8q1~^y5,_M軯T=דqn{̓[>ڶ(D ~s7tzh6DvD0+@#w(O薉Yx:xZOo w3O>0>a'];:[;7pZ}:аF*aMҩ` .iyZm$Bɨ|W!'c#ZZal%E6 VAfY0H!Q!sI2Ҋ IeP4BZ%;6:w BAnQxi h**u(:ݡ-!xy9xi@yۧGta0P߸+qA[,X]A8&_XLčd Xqk ;: nhJ}/+*A[` HJƬG6lmM9ԠMEX m {Ze3@P: RL1 v[]Ye; R5rwU #" +ʄ&#I9G1jTPTZDJg=l4Lb oDXqvL&j! W۞dWNc@!7CAdܡ ?l9|"(0,JLhW4 ~DUuASPbt,,x:M;*q b LN QCF:|7 d&/sAGM-V│DVDCڳ;!JPA/uk846Օ/H!8I]Z~,dV%׽6B7 2+=tDRH NJDA e͜`3d围(n i${jE}∽}Ai!:3aP/yrm/ĥj*٠db̠JUYa:iB0`9fLW~eU8a!\ ުa}Q7u`mFL`-$ >:gAuPiP\J7#JhIWo#f BX9Y4<#y0 _4+Xq=KÙ.M1xxLPBb2$kuk+w<o 1ttXTuaV% 9աk4=!V1dxaQMZ uY QHkC_W."O֮ TG<)} ]{ #Aˈ|uC.MAy1k"! 5y]%@_!80f;@]IwxEc2{"<ѳf W nk 9%bkVcCX DA5DC7HX] @N%m1Y5H#VF/C̓"#J&^l#0  tf% Ix2$?< B#jw7ygjތ,*ga1=TV( Aw8+B A1)'jjcu?~X+,*j$k4Y,p(m@ [SW`$XHD)40\GjL!zse_+Ϸ9@׫U,|OW˴W'gs=W&?x@^,4zV6 `- ߝU-iVn6fY#e-6fzz4\}YeF٤#väDy ֵؓJC6G=T3ʍڛ/`gr5wn+2TJ`]PLMYځ DGCPCz{:T}f=. >m"$uSjCn ẟ,;:YycaUè#W(ᲨHitrIFn:T,~ xm,mT11mAiom-_tfzPkFQLzЙ dU Tk>Q?mAOy~5801^>@|5@RMwUY;Tڠ@4XAҬ5MD9 Z&PZ #Wfׅiq#0Q%#p45PzB%aPJ↑$5ێ4JCi1[.ՂʠriL  kȎEЬUCl.Xvk5c4&TsE}#W0Ij?&lm8w[qzp@?@~kvEʫ}PĹ߶n'~e(7έ:2rXuYfvǢݛz-heu}xv~< |xo{:[8w8 o?| ~[Sxg[xt^qjH㣛5GyhuӛPGGq~7_/߾&Qjp~{֧W__OwB;>թtf.gvssO}}16[EӔ?zh{3WScI3+vIfvNj9`z9v+"FI$vhBحn%v+[JVbحn%v+[JVbحn%v+[JVbحn%v+[JVbحn%v+[JVbحn%v+[JVbw[`-nN-n!.n x檌ح^ ŧ[JVbحn%v+[JVbحn%v+[JVbحn%v+[JVbحn%v+[JVbحn%v+[JVbحn%viJ?-n6.n[Woʠn[Gڭ( gحn%v+[JVbحn%v+[JVbحn%v+[JVbحn%v+[JVbحn%v+[JVbحn%v+[JVbni]aB:j?Ij/_nZc_nJw`v+ʉv+cح~=^;2V?.pg]#v+[JVbحn%v+[JVbحn%v+[JVbحn%v+[JVbحn%v+[JVbحn%v+[cڿ`t/g[0]Ík7{ښ*3603LZ; p^;Àr (߱;싡G>DWbJgZtF'WHWƪ Uj1tp-:VQ#t Z’nb-)gCW Ϣ+FݡP ]"JޤLu>9qj3GW_71xqgy}ޕqd0]l xdb` bdluJ)a[n-xDB죾z^׫@CGfl8CS}_`gu{n{5xB+E_R R /w9 \_p7co4o qaй)|Ok~}ݷxjs,9ϙ4BrȀٟ2Zug%`}1_W'UӃ9i-tk7;ĺ8wل% .]DE'7x8-O]}5ΚD`UBKɩUBhKW/#7) m]%\6-'+l7p}t%G40ogxDW]%DB^ ])i]`Tc ژ]X=v\vGZJybJ@W} I KCWxBS j)N@B-]= ]A&JWaJp)m ]%:]%^ճ) U+PS*U'J[zt94%d40Oyf▱(._R1^J~ڊ 'Ӟ[h>p0ͿxMAFU X 9O!|:^dmjA e3tny$i;9F ^VL\G/>8Y H(򹖜N)bx~uruoM'˞M;6KQN)93Z]pɇaV8?!eKTS6Pm!nRgXT#JG.9&BSȁ5|uUۜz Sors8]%tJHNq%4\%BWartP6L &SѕLh3p n3ғ_P2 +At5i]%1tJvtP+gU*Y p5RMS+ҿ[v %:n ;G2\vPnhő R]Jtoc@ X7.G)tbytP Zz" LDWXjSt*%T1Y+eѾϲAjt̟ bM?-Eτv|g.WvV5DI\9?RW To)#n:Q B.2-r|&˲FfEC_N 0Y_VclNE9F[eMM,Z09>[uyҿ}/ y #x7|:|a_ MGN)n઄O\oUEV(C+~one !flx7屝ۯ.|d|-vۏe1. 잝E5JIp60&踱$Rm0c42Z'[-Kk`b1fr> (:( L?mre]ˆ*Ljd<)V2"" ` Xy$RDop]~)?%L< [rl,/;4)G.L47>w*t6Y0Q;i؟]RU N?lWclڟ-Kڭd﵈'O-*YAӜM 19Ys`[3^g8??zP~h ;Mj-#.*D,89GNq$ NiGH:d =!"}Jeh CRa Q"!0F,s$#5<>?+itT4%곍r. 
> A DC雟WY`,<=*+('Znal[?|j|gμe>-P`f#l` 8n6fk9ثcVTX%O:tzYBI^9D.[tFWh wd go1Wt/Twf_8Oth[\5o4jXt0 wzs1ڧ3wXv 2۫C|WUysr;# KPG*hw7ŋk`}QZ}߾{\mt<˓ϽU6:cnߥ- {0Nn`M-"m;41*}Wj!6 J#68R$#/2 <^s>c(7I٢a|,~ _z`< SY\{j)3*Tb"!UdS8/)04KɲWC#<U8_σjW$r"H^{~u*M93wK JP]TEAFF -TJ#IIO&r,s2J.uh)*A11smCpp+S&OE)|I+ZJV&( "Z5Fdp% .qH%VBV'gϣ<}XX%mrFjn*5! e*8ʝ7\OØ8P=tH}JǂSѷQP-+GmbgXR~vS 0S͛[w$1Udhm>K?W5:/aN(Esfͭ$G>p2꺽u",ѯ=C끚B7UyVc{˱ϗ"c<V頼VY`"pP"E1rKaDs,8MjNctE.@QAsgh F刊`) NP*:djy|Y_8Ӹa5Xc׋ά+ >E{haG J*yuy*hel 7n8s&I̵$G{™ >,ԥ==EOˢuyYϦ6Drd K DxV"%-ʉ70r$1eO R> )"ûVSRZtgkYͲ68[:~K6=UUg I|/6۹HۧC:lCܺ6W2bY@ΦUְ1)@%'YR)p,XX LPx&2w텖[TxGDJNcRYW0YTqXˆO#@2Zx, 2 B`H+Cۘ?p*:Y@8(#(b!`B]h(vnmxKSaiBLa@#T:c1 ꍊ (l@w mJ9xBZΕ g@7Q% H9A< k |3ho55<4|7ߡ]-# RϰelBl)ps?0$y1dd\n=LWC #E̥x@=xJu(b;r$- `HAD0u. Lµt&S N|~l(85iatz0|鹋yE0oòu14>} 7D2o%hoߪzt+5Ӛ: \]8Ug0W\+)UX[u|-TG~,M?W}0 GA VL3pпY4dUf7}r\T.uQMnR(ga1FRxl > 'x n(ylfAy8s_}|_ox;xku-0 .em A!/㺊ESŶ)YyoQnV{GJ/m-bQ!iכszajDW zߢ@T !@C'6eRِ8^Ѫ._$'yfV#FaIsyɁR[$xV@|WHHIpXP^^?N1 Ky\'2:#0{Fcυ\pogPÈ$FQzhl}]Ot6:|7&k&n֑3{{29S[t,*o\si Y.98k˿r묯%#3} 9@,'D 8H6i8KC(QTGƜ28=xdH<bY S띱Vc0Axe4zl5 ǍK!OTc}d>l`$ϡ[cYHmw=,Rur X(w% S#VΑ(~z~@vhqK\`X:w m`1>o{ۅzvu7ru=у;,6W]^߸w4 /v nnwO9]t[8ͶCjP˦[woz?λ#zh=?1k~6ð_=qvpunOP^_5= Yͧ _~d YCe 3pάXan˄oߘEe @-!lw 8Y2jPٿ< wyx"x?4h%7BZ13dZHMX1Jpɲ֐C'b5qPk; OUz$CO95%2{oRAf\ <%Rs/%cxV[''n>!OLua= ?UQcNmLAJ1+e1 Q1qV¶H7F$X.* b=Shp+ѥDPVU*c+#bFwH:..VK}uV[%⢫vf&e*6:2$5Bg] b9 ЉEHx\ V[⡯'<Ut,?9s$_Xxv;e~Am}G. NhׁFbV7Nlx(HLNũDS=LǷ&ѻf2N.F?>ګ;U9]і|XN(mJF IkHZ0jH)AIIȝ¡ZKm(ub\ԧMy^dw]eMU1KWom/ډ ur{bGп]^LY(4u>3m GȀtC /9kyhUgI:,Pr`!fm"R`L Zu W Ύd%Tċw>ntYpgƁRƐtkv1h9c8 G cay:$`aZi5&`؈9%iwa&."4ŁfY磈)RPfq=*j&톗Nь@:wnwACn&}f&Lime L]ɛ1NM]CM]N-u$9zP|;wm4#zTF?-v4XHYfOѤ:;u/eD. a$]߿5_:\oF9s\&^'dBp~ĺ ]aѿ1gLdw\}~WW[y^OV`l[kvIna\__6ᖖH vgV#I]\]?QpG6eƯ0u]wض9qֽGz͵;5eoe*Z*[zx1Qq9HBB dMYMW}|YA\۟0]t |zqڏJHJ\z<{mQ~k0RMnNi3˙CiAܗ]ޒrc}fWI#k=2a%]:C>rhnΓb,]yJ{ vp5vh5%pP%bLD[7>Gt}&"0< 5$fBO8Xf=$2_'Q ._;Kdd!$ ReEKܒsBЫzeUjks[JR?PJH!"RU^SqZ&]bzкz{"U(3kQMB$C8#GNuwB1ѱ.N=>|,/\ !H8O_yhO0́Rv&j(Btq'XσmiD/xBn&b|dVڕ th4- wn^cf:{7;ڿ>J>j׳}O7V-{P.&:+5}[ |o;#m˰vozeXfZI/e2Yt[V([4Cyx[ۡ҇MT&T_l~{0#xcs3' s ?FyGO_ĒI.ڑ&idWZG6"g55|{K8Xo ]hi ](H_x{LBۜlwB{9ivӽݗvVs~e_tw~l~`VwG˜a?f>],~`-τn(써ڻ%jnn]py>6'z+ϠWenUT}W*U]B)j(#eFj91eɵWIQYAkу eƭ~^O>[:|&IaZsnjinYϱ:bn3dBkE:qt-S2䈠LOAu8|˹+'Glqyt}v'f91# zKD, dP S>VXrHQyrr١nK7ȑG:zRZ0#ssLNu~{"dϳy{*~/S+ R|coֻNںM`u |^|A@g=z\lOlշ缤Xu}L\ Ř^GAI7j%vT $ mR(O*GRIflXayϠv j9C 'Iq_`(h[II&[.i0ιpi/ȗ r|"C>& Bw1UΜՊu,hb.Ĺ(+\K,"0'!d/J>nCk'Unz%c}2Jb79 XQGELHF-J+Mi*d*Dg0m3mtG2gTq" / F-x!bų!r9ut'oCpK&Kǘ1eγԁ=%,Le64S_Q;1ߞ<u]#va'b<0 .|Q;xq̶g'7')F }4F6A8QsD 9 6&IOg>ƙ@pf'<z)Ì:;o|h8g~&٢>m:]m3Ow?|ǏեZOr` H^n5*F9滓߼/{Ƹ t{?{7ۿNM@sG]}fdO(owZw]dmtھ]8w:Q+ -?(cFvWr8z~#k.znO)[v[sj,O>U|v~:;aklf\ͅt!>c{f?w'7s 34gdOu)^lt#'yLtVO)֑[OGQri[FW fr;kT70P=ŭ穌.5(2ay;&])pGWjѕ.5$j{|ErjtkѕҲ-]WJ隮֨+10^;xp])J)}hZgfWU4Hqע+ t] {ڦJ)45 T`]i#+~6^'sjˏ(^zd&KhAnEtIIe9!`tU+])m,^WBYZgitJ,=u,])npJhpRJ۲5ʆ!V+ʆhL5RZJ)6]QWy9@e:t r9Z`[MԠ7ʋPz6 ˋr62Z/,uvRM8CaSr0rqM=9)dZ,''ʞGT+'xy,]).a-RZgJוRzlZ\CE`<'W\kjѕR(]WJj xtTOv\MgPi\82r u}+])n&Rڅʣ:GWgF\~g0/=$=w%hW* Bꡡi1T+ѕ>~]JiCR蛮V+S™JqJi9+tj HW ?j]xLZJוRU*T-T_FQnВQ!I;OSq#~ mSz\aIiTIK6AA|(}9dr3D\^ {ޞ _*~ԞGZ5Қ²++tЃlEG xKWk])-uMW+:GJ &TKWXt])ʺT+vJqm5A-?Rʖ]SWљGJ잓N6%nO*BznM oçFqlڐ"adNc&8>cH}o0sP%OsnnW/SuWW/6)N滾8?7ׯO~LW_LbO\IT=ޟ:۪.̿|v_k}}n;2@i xxU:f6 :'DgNAZ]32}(e`<\ZRV{JSV]kҕFWy (וRtB]9-aER, ] okyrk7˦KJ]=R\_MvBLˮ֨.W|) KWK])-umt*u3,sQ\WMgPi+t|tE{NMPVfy~*7²ʢ}v*rk"tE":Y~yK/њGCRMW+FvV+T+Ņj+E_ MWO+8N^abe r7p=Go fMl o' !֯י~wޯW|V;ݗ7~'>MiNGͽ|[]|MVm?6"~k$"S~eqYqD5\gK%~ohL0LEKGcKٝhgoNV7JmQjJ cߧ@.~$rWM<?q2;Mf!>:"b7p4(yϋliJIY^jy/}b&+ir.͝wLL4i 34%CQmfo]u!cp\$HC Iqi֥Gi\kDm̃Ih,}cb5iR@ь$5$ 
pɍr$c4#턕 أ]~) אMƧDf8;7%+}"ҸH"W (MJh 4~u{ݎbM^Sh U`$<&,^# a`#J'`B|3k}$no͒9;8#EcQALȒI]e׻Q/,d,V.yoaI1IՖ{k66+7&e= 4KKr7LL$֣'G&7gZ{:_!7 \i C#!uDlhWh޵)EEh'H$ᚙ=k}ά3k)􀏽`HJYlXkG`UhU%`}퐽kj $*Z@%U>8/H1H V-tC*D4`Ԭ4 3*0 e>id<$X( ꛐIv:Ci*V2T Xp6L:@@}XE[PB]Q[=ಬR uWh%Wk@ e º7 Da( VCH(PED&TD;#]g:g-JC(ʨ[sV ,,xt0wD `L `iƗ)B`@YLu>J@ZAA6uL:vA@GJ LEwf+%RI9n/XT5oڳFxwD"=dPI6d^iWW.#{/^Z#.RVK.#i̼Bիf !ѿ&伖H 안JCMȲZPҰ*4]AՊX.sA ԙy ]mn{1#.Eu&"9ih>(ؼ($0D!N&mrv&^r.tYs^L3ܪNZ01].0A烄Bka&a-A7 sB0Ta:Ut4CҕjJUe ,äc*J,J茸pΠaLG.vk ) 1E-`JɃf !nK)5Y$iNJ@K^Bk֐h T3ڄ`1bkA+}E̓pD"tQ:ft]`mB$1SP] 6}T "Q8TDYUQõ-`QypPBHcF8&gوN ˉ6Bbnt=liVԀʬ$޲(m@VIKުhQEx i AIX@iT_]ŷ%e SLЍ![[&ܝ }@b> }@b> }@b> }@b> }@b> }@bПÐ|@l_x0~ֽZ7}> ?ް}@b> }@b> }@b> }@b> }@b> }@b>?A`!dph;zPj> J> }@b> }@b> }@b> }@b> }@b> }@bckĠ`\;ez>H> }@b> }@b> }@b> }@b> }@b> }@z.9-5Vr@Kku^u?*- Wi"f8\7틀%ױۖbң-?mr5Ѹkh5/^1bA)n@zZ('* !pvge9 5׋U>>?-WJ߲ `:o>h W|P 5I(^:c顥P&qt Q$E~Z;͍>>et~n]? d͎ͺ 1kQ(#a'VOMӟjtX?ҡBѵ1v,Xđ@YOQ(-ۢp kn 2n Znx*oxá+`;"6;]J癮!]jR}1R`.~%bNAd@W[\-.N.bv]A%2i峓_Χl8kl=vv=~Kg'7޻|k w'!uG_t7Yj[[m}=@MF?F4#N6&V$+y*@mém^qGˁB?]ۓecTj\tEAOT2۹HC5 &8 _er kCV=}掝vc[=}qu{g;ޛlkvr{Ahw:>i!vq0bjھ\ղMKut8~;yPlL |\Mr%'xҳ\xubݣ]~.{ ?KxZ m ={ܤƦ86qYFeâwG~ ~kUyPk36bcVMu5JEcAwz A~gBnGc'Z<6)qLӾP;\d v ;ˀ_'"Nd<"zg^=mCX 7J*5J}JQVrt?1 t(tEhѯB 2]=Br DW e9"n0m8zu^zGHW.a^ 7`UPj.#]'EOz8C+B=eLWvPWɡG_ J4t8T8ս/^ ~h@~(͑]+tC/]G~pF9Z#±ҕ«!f0tEp ]'p"R3]=B 0bP{- ]\BW}1H(`ztdhvwt-@wӻ{@u;, b@}[f{θ@}-_U9g'^襄N׻?_Nv滘p{?g>? /0tg[lvO?0s|xws0ͦ꾏'71=\/gY}c>뭴e}1뭳3{f~nْrj:] o QlY1߽KGЮMѵ;?'k4}=ɜA"L`؄Lӹy[1V'BյWצRy۞n7h |+ڳfkZןvt/lvtvW]=[宎^hQfQI7f:wF',cϑ4pct|j;v]<a븘9wr=C*a>~a_۾W_o'iu=tq-gy?/RZab:Ƨ0^~>[w_r6vPf! {n|/-wV>=Ѣ{O89&_,~xNI  mtJtSLFO':_^,'ggRS93$X҉V^if;"Nv_c^Vm囶DE M :*X(+~Ғ1X%,U˫ՓEn[ͰwS)O'0nOeػS㋜7VݤA_LUFe]3:i!ז:]}FT2 U"BJoH뮾4&&nV^nzӉ>;N`q\cYZjBBtDAGF.j.XR㛙d|fsMAȨkmcGؗ}h*`3{2 ^c!HvNr~u%[-;r˒N8nEvUdUq*9Skrg={Z̮5{ s (s@ހ{Y=TD?m-Pvf^ad<I³|!LE#56zՀp \UL)c;U5F<o=6#Z$K;>;k A3i[0)"AfbalJE8H7 ʄYaOQJisPu _!'Al8ٴ1I 踤ٮfRGec ~r#L"Δ "crY"65?if^6(ѠPJMr͓#E~}<{g- 0yCB[mhA(聢 !NÁEarv0nX@HElAeÄxÀ€' ,?EP%m>%R2fs &TxIaU(O.qM-< ,*F #Y5icF-SQ"`Af]^1R(S)WN]zRp=̶)+5P!DH3(w>Jѫ!FjYyUɡSL#-mE$j KjZ !Y71MgNNwgu4oϘrT4揺0ZW].SA)F@T<Sɐ]tϮ/<]_Hǵҡd>cVtP!Q`6'"%i(`DDSJ%;g6]-ֹ)JD#XT= ]0eݬyWBnaRVS&_}^N#^Dھ j=gGψĈb*]A̙Wi0|&\>b@MMOdȓѝ.idDDb1:JXl=X| JQu7MSY$1();SdPw!A] QC%r2t3;i~hMFWKҟή޾2:/ԝ>7TUkUn;T]0u bzzҮV}j+}3}\M.0EChʪ-R,Bx(2ץudC^6Y2T(\0(T! 6S^UvӥٰqƘk۫%6>qv'< @ԣ^C yXE@w/*~.N@mM E4 WGBj b:<ctCe[L LBF @EJB"W RY1)G!-)NiP75ʃbЪ&l/u;瑩'r2Tr2~g}xqԺiW4cjI >MéѠ䒌 2%$22I[3bl((H٧n+D'Dp&/1%Y0zv4%d EE%{iRlT% uޮy؂7W`ǓLb'>rMp+zfy fYw`guԬdZ\Th.K|ʼn")F2xd#9# jQr?RqsK[4b{;=,S:zU!ؘKDArm  Β+=|r;qσ\nbm[Qjg^Jg3<4wN*]9[BBsJD*?bF!ly7kQ|Yvj98۞1Q.quZ:gAzo롸k-҅Wth Jгk8T#RJ ga}GV[]C}DtW}gv7j Ya @eIfK" |Ul*3xn't׽@JOg-eh5O:?Tr̿ !K)B jhE5s>Y"eR@}Şy*Wt/fLeۤڤZ%%eg'ЉM 9Ji"gC`g,XG!܏Ljѿn" sBZ;y410pr=k5)_hrɓ̞?"T|\01כ{{?wf!fxd&'wrJ0pTCJ : IS?0~:b'xqY[$#֬]> ™6 ;K\A\6_w#n2>V-ʣ*}SN'd>'~H#o޼͘s؋Ⱥ}7M]C`H7]/V۱] ~%YxwG7;̣?--0-nn颭ڌ1ͬ*{6ĔG㙏Xgh}96-3ֶVlkXwruE2/m6>Ob)rR<ϗ,1Rj˗։`4}笎_~퟿~|Rw_'kVEǓpyFU޲iM𘦱^/e[ڽg>Věuf@Z8Q`놄5l ΚQ:Iˏl5$.b&_HEBŋ4s!b7 Ж藺nӒg38ҫc+Gz0Cu;ORbr P`R"'iQ@ZRKe? 
)h8,]^3?1 {]q]'g% R-wϓ{2 0(/`=,zR@8'okr5tkH֧9"y㫷,TD  V9{e9'=]CЦaS#Ɨ&DyFb[Q@ tNH` e}*(2#&{&>$-GGdwtLJ쿊 ZCv`ʱTSho nnW&#"xT~8!, I(J1,,VJ\۹]ۂs@ g"av}d[{^J'̭/ldvʯvuĘIŀT 4GHPP&F0y!(hP*3 %Z | N9ťoW12#XP3Hmfl ؞&B2  ,I3r7~Y 7h8_|匍PT$Nv)M#Pۼ `J@3Lmtd>{D%$ڤB:`^e) $:0Nƾab k&em2k{ v[83$+/D( kEaT5JL٨ aBg ,d 1+ k|Z'('@q $2G)HvV͇Q_KMx(ؚ|ܗEˌ(zFqă=Kqo1\j!؇(CC]F tl+bKFbVXA`Ir,i$53GkIkpfO/kϥCZlMJˋe^=/5BO.d:D`hT0F #B( ofw=/>/ &C23V#PXݝr"nۜzg~'W.| NA( 9 I+fBMZx”;=>#9<_ǷR&٧<[  ݵ<20;ϔpí),#;(> a0486c** gGa$˕3\XJ/= Rh#hl-B Έ ̀: p;3'2.&wK^[Ph ,]R)X'^mN\$ObES8pdq'vg2[z6RO<_0gC&^΢aAr(Zea:e0H `$yJ0Jrk: "BA 8yf'j, 8hՊ@D *9z=6DQxi4om``Zlm(I ^JQۖ p;Ȗ6H< *Fu+q$g!T*9RXy Wpo1RB@i K3k9POuOW\ήiDne3Q~>Mx_@{&2Me68RA |1NSQ1mjxMtV enp,G=bƌog~o*Mm]MU N+NZωtz-QzX UMP^Z9Fi0h~ fp_g٨ȄT8;UAK_Rgkp|,{MP$gR ǜZ#fDJB3FP"$+> 쟿|[n<տ W?;39-.n L4yyn.@D@>%^H ߎhzn#4]ş7^l c/>]UX}DRXUOw_ʈWkٷe2Z֙-5Fr@~Y&Wш\h  XyXoN~{vܠ7b>xwZ]{3i) `1M[ 6x2䔒a&z(*M]y=羾 ~vqlFߧ8L Lvv|-}6r/k'Ŗw*{G䫥[_Z t6v;*ȐaɅ SJ*2~[՘y)&Udr*v_ D+}WtgN}NOa2?(@H`Urg8*~rGeG?e¥s*Oޤ8Yx\}ЊGPPƩqQ b{RgŔ}p/ű٣AyhUu.kBRm0½FhUielUWJ`*]} T\3r!\0s,_̦OI (TeM ▱ o|k= O'EƆ/,Nf˱Ke^eU%F T t?ppwLrOmΌ6>3DcA=8{Lj\2y,![nu OyA=Kzyd4R{ɱ0Qs-9˝R 3ARew,WX4V ~MBp-c0E f+ivB%uAmpC/H0p:@tU6eZsS=sLr,2$ <0)^?}C OX`Np M'T:M'L4iZ=SC S [+@˱8tJ(=]R;DW xWҮUBJ/UDg*՝BU>/4R U,dg*UYJhtAO`Ky!ݛub4-(k,}}4pԑXmƥ!}(HOȘSŃGj!e< 뷗cd2]5B;Նf<Hr[K:$Uvگ3 v ˆ㋝, ~8+mwQSDƑl['aCkOq0fYw% cax9ia,씶׹AmquE>*a:rWwuj|]\? a<;Y m3 u L|ſLޖ)+7Rx_y1ThFVo):cm7oZ pu-Wy_Uo~PCofVihtڒVHn2~CjXQE+gl9. ]C>ݜᾮM-PPMp]^6VC,4GTC\lO''U3$_R4!AM*xPյyIL P-AibVx~e!Iaq86u-ހKM@H?{<.'#0%h2S0 f^0d>styx$'fQ7rz63 [sEl9(h28؀+{wfv2Y,/X;L25g?b2+"*5@gg 4;7!ie|ER N{0#F'\If5aMPW=^1'᱑cPGǏ /RCN&g{9W!O#}0ss6 {S/59vc{!qHrm!3==U^lT{4vhHhrG4i޷r7ބMYt.Ϧe_T}qmtBdOESݼkN&i4Wp 5]Ld&,wC&I'oHoUaN>P@J7OnНU0& \ݽ,:i=}DyUx?.*w ! j5 ptv^wǗemdꔌO4Ew_ BbnI T7r3=I t8x֢ib].~  uxYU[j})g;hݘ:tUcf7[qS6#A^?aGq"xK1obE@l0+3iAσu$gmͨiEybEŲ„XS.QI\BbVŎ@_ihXd9Fu"d+zGjcuyP>,FwfRB V m5ϑ*+w? ,X;dyƷ4D-|{F ABb[%$`8 if,$g,|V,ܘY&r;K}zLo'e#6rRnTt+P J#:mHg`6)l6*)M !9{D%$ؤB:`^e) $:pT;;N+b jgcQeFmvFK-Cc Gh"R 5"Lb0hDStW6*BV<\H☁ !fE;$`O}r G(`#6SdlƩޅ! b68EfDgD<#.S{"-<0FމHm ZEpR}1240"VF7QzHL8cS4(&LR#1s:ߎj XK&%ʌg\%4  R2c"A1=(SHAxDry&^4>sa68uf<3@XObd+ e~\s _eiޑo| ۂ` $/3&0䜓)鑜vQ=kzBƓ ΧI ?>ȟL-pLk9 V9Q:o@qBTŽ\Ω-@ rH# be}0uM A #ӀG!eLD:א<ָb6oJlm1){Ag5K7%X<ɨ[~*q_1gJzeLȃo N84ʣĠY!r'g`8G ! ^zspQ9=TtVփVtQ9b!/LqQ/*!)B@3Hp"}k Mm0TXP5f s&1J3Sﬠ`Ѵ" (lޙ"=Q aG+2q0`8v*iXGao?[3`'a4K_HCza)Qi?xdoˉoC GV)"a1C H2Sb?))JPό Jz*8ib.?Soc"j^{S7;5v8)(N{ $.L];7a^[  ɊN[lB!%:t&{4T>ղw15)ٌP] i%oWWf :Ni\JA=mM Gl<| c8?K|5hSWzO5ڄk|v1.X735~1*Nq۷Tv-@V}}~16ղs'!4PCb&Ʒt3T FjօeHմiAFi5;^YޔRJZF:V)^ANBבT1t锋? 
x4mcp *Jv lU{{뿾򫿦/x/5&x/`#"8.YwU/>jsU USlu+P&H}$^{ˍC[(F_߾_K<`3&[V&xbוnUL-בhB|78Ȕ dlZO,~gVyn<"a:gt )-r<} qO='p^ykaܽ`#`.73haDq)^eβTQ68*hs:n`ٝOYN*Nf'5u)4tq~Lg\]Ek% &uDhQhc2;SaLOBؒthW>bCzoSl7#^:v=V9L1T)A^iGj Y@SJx-`(젩IN|6&4qF (ȴr1m> ܠ Dώ[uѴ"唰4bFr͜EUEQّO-vכW j.Y||SuBePhj(PH)!?šnu]+- pdiYlLۣT t.F"؇@@%m Jkā-)\l$@9S&J|\ қ8_ ev;TlYEYP,#,3oW #^1A5{-Q|;7oOII4r^VZԸ U6 lʵ@YOnyvQmJ!U\XnZaqy6SUEUZB v:@*]d`9KyF̭KU[OJ@AľLz*k`FUdTCՀ5D(c֘8i*쪎b31)rc4dgk87qAmꦼbҢ{*KQpI4}"sCU> PC ÀW7W9pr8q/NVJΧ`:P7P7ɂ-y+{ <5pnǝlIE9p>j\e Zg\}Euft(IUJ[oK 5y:ej.HNr;֎& .,qgo<;eNӖ&Ͽzz% :\hcFL$ZasF䑲Hm7d4^;7+bʙm%\ʞረRJcbRv15ׇw¸lA\**4ՐZh"H U9bɊZ˾*"Y{RWq;u0\ѲQƝ/[oBEDWVB뭋^BT)AǶH䋦M:6r{0hDp Prkcd9kCvz#@MܽG}QZ@\j ML`/4<~Q-H\X;}!BEP,,9R-xX9C.ӵjj3+2y[x2_sdNQw^2 g>5wwoKm[f.0c_!g3>:BWcUh0t Җ"4p7^%wcĵ rRD:NCh@CJJ!nQń "Pkkaqhkd?~ '=(]or./>LYPzvC5Ną2sHŸA XVhFEQl![x='R"ıH@1145+%C1 T5wl_<ӋzW+ΟXyvbTW !LZ[:6wڮ,U[PۆBmNDnVURn-Dt tڃ* u{.E6Հ2JӵTNxOa J*:`N b&V\X<- NQ9`^bB8J^7-m׺ܻjC8_Pn"d-KY/ v:],x̀mJLT2j3tvpF I4X ΆyDf)q7G.>9=8|g~s07u TǺ!Ѓ『6\;%3$Kąj:ܮ&!v0 zUpgU+=IiTh,EC ̻cD\P+Ik̶&z%_-ˋ6VM}tNr<}'ؖIޓ5'rY6)3vwr٤' o|0we<~\1.ɏɷ[}kKAG/O,fh="9}Ƨ7V}Lg`큲(~3^L/2Dz&^Xq#D\{eCW>nx<&3DrD\T+\l=IiɽBOтݥ M`ڝz".*+IyUroRNN~)N`K=_uxyĞX>|iˁE95_\xWhkOOx\$*01yhD b,ipݱkqd/'e oڴ>fMUm*TckeŒZEsFq*o=j1$ޒA-@Ad ™ř,6mNYAoqI h#ԯ"U9:3fH"@@2#dхY:. uW2WzlͺKet_'4)4J._z/4 ~/˓'obt$`Wj$?;t.jp~RA=5zea~f=O ij ~ݨԈ-{h,leV3;ِl cU,VMQƱkԶ4yz|[|MW770r}7WpFËfl ۛy:ͥ/w_ B:$꼒tuy/,oq"BOj̣Wxt5WSZw^W]dUU+m gY>LWgLtp ;5mTN|t67_oz_}=eyp:>CZz4o׉Q0 ;(A(QTJ%%t\7;}}z7OTΔ5Ӊ5_ӰG ;q137?ݝXn*w&w9uZu,&D,x9dR,};RUdDUuEsv&=؇z5&瓤|Y<mΓ(Pf3=!jy0sڷ|؃w5I:GCdiy¥7br(`%P TOZb RBe(wD{_e WWueꈖ|JUX.G[Vزjun u=J0Lv=Hz7 J#w[vIE}wc;Jew|7Nu l,3ul萷V0g0jAC>ݨ^ O$ۧ.cuT!gEB%N#9hpo8ڣ˔'T1ID"6J Fe8M4LHʟpwm7; 7Rˉ݃BsB/EY,VO6hӓY–mXmxg3Tb)KP6! F`%!ʉ@<##2"uiQdTF'=81zb0H z.II8AQ+Tid,&nd,Uaa18 ya,d',<*o23>,R,7~Ӡrh8t&&MHTQ hZ`qi bi69 O9_O60½geJ%ж#&&lD8I p?[֓8Oưbbiǡ- 8nx/QR1˔!"@qFDj'@ꂦl2Ģx.xK'y̐ GEfBs<$ $ gڹxXLxXt1c_~0""qƣii$@rE*÷̫$=p [E=OLpN9!o yDpFz H@PI*H 0R%NBD# NSy2]wabuzA+qg=Rl7,Pj5G4A69K&( <Ӭ/6{XwGԢc?JLv (RTڲ./d!Tn=ha(B@T @øqVC HN(E#3h0f)& ̐?:U9@-;<DS\_]&@TӺAC96:+Q:BSEX[^?hLib,ܸ1[˽66pE8-{}8'ԎqN /柼RDPSWQ?orv|np7qEeunF.1:tr5e̓U_am׿,`G\42|ES=DӚ`H;"O<{M|ݛ <镻~箯_ s^< j`lk:+(UMpTË}@q~8BWPGmBd7k+7t/RY#_6ۿo߾{FomF '̲wtw4O{@oז^նP/AQ1$*e=O""k DKj+^91n:埓|y?'h3ՙ@a.j{ڟ!m/Nz5I 'a:x 5ˇBױ# wEK>4u˿O&AR\&{e;x~`qhPxxq^lҿ#К;R .wPCEj-N.t ^8* ~1Y&mlr]洋΢1u>'(֗QE. 1uQ.';W}Fv{ܦ¦U׽/-{P.&<M %}[ Yk[ڰ {=1gbZL)ˏUolTڣ0'S^dVhQJ(+t"d^VC|} [J{_d ?.GS_Lfo,X/ry_*SZBP]$L@f&e@p+Dp|)xPmpJf$ys :bκОLg2a S# p6QUpϟv^5̉*4԰\ v]'s./NzR^<'i KAEɢ&@)WL[-V\?c1x$,^΅n<jE3.Y r)+x*8duqOq`_]/G~= Y\-^s RF8,WNںN`JbN|*RY[ʼ`*P lfD(e7=#q;GzDH!Ϭ9w:ͬ& \RBiO^20 Bk$,b"I$5*Xi˵$"i%su[¼8{Ԅo Wד01=?< ݢ>4F.%ϋ2#4#OTlAg3o٫'c/N\H%3/gy[_򵼾>n_xque&?/;dǏw~~@]ߝ̫ WӋ9&<+tsR&2L̈́qW%'d9۫>e蕀^\,ܤӷnkcd=Qwd y 2Zh7]MQ4]ĮJ}J),eL0*M5ϐ@18֖GOl%[qLM5r2R#Y5icF-SQ"`Af]<zO@DtBȜJ$1E(E6B M'InEq% @ Ba]L$@((6z$ "›xED)qTzQjY&x7K |~vh3& [em>=ʁpc32=`iWO?=rAYmUVOac㞒6m$4I@!PH|ƣW98-BU/0Z)xØ̉jv mY=渾Nf$34qaß|)lVkAe!4.QfH;C'M ɛ$M. 8g.0wن_ff44[y>oGfO &tz1Qu'ɇԃ#<4#bKGũNy:!ъ1 1 aR <?EbLfS`vuudrrJk/U^uD(yVO b:W@2bOT" kXT AYc.}눽 b-Ve/z5?~}砇,799n+٧)b7Bml*ww'4wظ6L3pjP@YcBs$;$%M&&QQzlcpnPNV9 `c .Ut6P ym  Β+*;K5g8}$Wn~8~|kw{J |; Ωz*18g XH0rl PQ!PMɄ(wziXj]P ~2P^[Ko-'Uj͟d3]NOJ+ BWTVh:gHw^d6M|,ww)vqԎw2qqvܘ_: Mtݻ#/Xɬ yeɪDẀuE TF-ex V^Ԃg(G1yƐEY eӥHBΑ%^'zJ GW֯Ip j7u7w zu(~k[ޞT=aU.:%O;89=uɢLt^.:Q3(Xukc!'ʒ 0FZLVPY=Q%mtyg4Fe!yvw,S!Ɲv}sGDU=>v{α`wu:ouВSuP*3rrα[)Џ>׼89~cݹvˣ+ޘ'f y :G&_Og󦉇G&BT c ˆ)|{~[kmHo 47TV7TVʞ7T+GFp ]U'rkm+t$W0ҕ!n*`hGpѴBWC+F Btute ]UZ}ED(D>D"2]U6CW ׈fV ^(x9ӒݑN$pke+th=ORCWC'o n]mwZ#~j3j`+]zl*6JBW-UEn+%5CW ׈f誢WZP*=ҕvhyK>_q?._vS)1fq_6kZړ1ʡ t#]m;ұ'oWV誢UftUQjJ ! 
5DW XJ ]U[v_ɛ4#]&]YRmѕ'Zatejj7t㝡Wpqu;Kim3y=hQ"#PǠQdOڤl(`ͪ-+*#rHT5αRmVw O'p_ck红w7s <+^8: @k!0ePR 2P %"H`Y* ~!>]Rrab ٳ|Qb)IX>TUf J} 37 Č6Z,b&` :`ԞW"P=P$m..90EqEx&P+/uZclDma xLs v,0:^-y98T,Ìi$ZCi2 G`93#0`Y iCP/jV ipF PHG՞&3t((q˶ %.ζ=p+@7/h J`m{5gC1I`U\|  RLYl' $Cd.H,9vQ@0[4&y]H] S D8ږ]ՖARܸ\7K7 "WgtTѺ+ u.,$׮zoxs L~l+qw ,,SNKSz/&&މrYgOvci'/n>tfZL zvk לڙ>Yttf~zx_omG|*0}[IKK gBEm6g-SOJcIt<> {D> D+Q |@"D> |@"D> |@"D> |@"D> |@"D> Y}@X͏lbGB%=e09#D> |@"D> |@"D> |@"D> |@"D> |@"D> =&vx|@LL }@R)Caf7ټdOVa>ϗf,r_61 jzd_.HrVrt-|I-{'*{[8:[~,^{_f\V[a9!]9-v17PZЉ%WboEf;d|gZ}Z x7Շ9ںSb[ 'ǤU̻6.Y;nAd-3(Tltd#;NGv:ӑtd#;NGv:ӑtd#;NGv:ӑtd#;NGv:ӑtd#;NGv:ӑtd#;NBt X#өjgXt-Zբֳo6vC?O"ӓ4v3v~3xan^*dIiͽogIQ լ:R{ۺzg8i,F_SE P+q d|' Sd{#: {:zHۢt$#^d]mu}X7&,Xֳ:>ޗ[EtY$YFuy}TgeEνݽX)Ʈ":%YժKonI#rfDx+讞vnK3/;߆Gg{~{mJæ=tK[ܬ&W墟tNgKeϗz1o̝n#Rrcq%,R `7Xu1[%:r>h}_&J׏X&s@e1Y}n]B]]~tN[ }B.KI\],.tmvS\z ?<.=>c olj.fK`]ET5`5w묽l)0 adϕ|F->/`Q=M6+`ܔ_Q:8u桄gu*㗣_6t(_ws|uVGR޵#"?{~Xego=,AH[YHrzVKLr Xj6YfUN\vus렣 ϐ#&A\7,W!muU$S9R)B%J,Wu4J:,N ڞ>[BPX9H׹,[SR$# hB= պ{T;kߓs䘑?_?)\w/hoN†dSk 6:BX`)!0 J%!(S-W@P`$c"МId$Δ{S)"\qJPJ{<#g7/Q׼Z}̜c S">oF; O; h"螾i msjhJQʋkȞ?()X_j#o/WПyxOt/z⦳r=P l6Gy'"gג)LNbbm Sr]Ɯ׮Ocp^alN(ֲ ̺GJ$H`/”|a$d 9UAEM!6n C_jkڇP[b+ϣT(e.EAǜDWWQ 3so;F*Hm! \i/ DH4t _7!I2 "K%'GV^L>rSL΄Wc7Kߖ|Z7 'o'7\'%I]եY.xm??԰R\yFMq".zrJfO.gԠ22QB9agcrv :txwg 8HtWV SEb:"%p\IyaKRuD0~-T]ojA2%/>^z|v3Wg@)BYثӣIY>$z{5n!qrRM}ѻLֽy6Lsscbz57Kfn]횁Yd.'w$jI#ݴ ZQ}pT(ZZ3n|??!ݽ݌s#vMԊteq=,nAo]TUp1!mY~ȧP-I6#clՑvfnדqO7'Wd"9V)ſ!'A-D&!JNOziBU5vFp@msދۺ`%)HT0-3(= t:iݎ|[=PlMܳ5- gԏ59ssӝwY*<'q`9y j ^Tv&P2>f6L R!fϢbs&MEo>-MJ5yϋ cl{ϡjs{Ԑҧcl'\Yt 19$k|۟6a@5 5550G(^J8י 꼹ϸgCqBٽagMf.i\KNbhIz= P7Mjf-nدpINh4۝<6 r.wYreIuIduZ=^c)_ۛȭMGȺ []Vkw{h݆}Rb [ϟ[WMt;:]g[Vز6z^5<=ܾr£WWϼϚG%֛[wt<׳ N;r}r\_4x%n^k%?tatnRmmPZt6MG9hDۗDVZIr+o\$JIP3} %8'HO#SŸ&)&+%%!/K Ihi"$ pO7mW[D) zu\鷃Mrq Zw<`!Z -W gc`#lfAeٙeG"HJp]D Qxdl51tRfx5 ,vS(N򰸏Ja͆fO7@VEwMYXh#PD(Dˉ@-0CJA%9YRbNGo)Ęǣ PV"ZCIe*&)-Xrg뙈**y4!&̈́NScNJ["~=slVɡr\ܚo+Z a b2BbNd#K;x x*8T@#o-| ~gȤϟcTJG}%U ZVFXQ bVwcfKr6k*D )[=ɢLCihp8S]UTUWU5)d}N¼jX䬞#iS{vgªʙEi,O,yLrc$[fF8+ϘDOӏ焈0 ӆl( oͶs 3bp~A-!gT?9~؜C5 .׮푤~۶|Ζkv"iX<2gOן|˪׳v]2 hOdJ@vh|IPJ+*oƓokoƭzcOOc,OmbZ&JuU%w vl}v$7.(LZ][=w\M#3dpۈ6nsIjRY&h[!,턷yb4ԋ9;t<>Nц7-ifn"8}xY[CVߓʧRYޔ7i^$vJ: uHsGQ;bڕףctwJPU gx]WP9f/AJ޸ha6dT6D+g%qpHi4Wz(!Nn7mi4;RIW4[N~d!g'%xs  y&EC kilDtVb&e{p9.SIU޺0smfTeůvYlj.ԁq: ;' _"Xjeeᓑ?R_Sv͠Ub΃04p>LHʾ?f ? c}EՍRYB1peU'ւۻOh*z mDMO/wvKu8 m_+`mn=m6o=z"%P]k3OmzuyDd"R A6,C&=j B^'߉`غ(UwR؏<ԃnؽ—A35hV\xwk@^)gpDCFJh{!c&'( 1[ S+@XAOQIt68mٹtb&hM'2a\n9nNLE#8F⃴{éQĤ64QK?3a:WvNl>r6;kp[Kr/xyRgkEtƘaN(sR&a@N2{!7KBd4ǔbSZN\.EZ>9[8p>t1)~,`wd8"L76?_|+9/*Gosшv)k4" Ql*T2g:є[ja.%WmK8𥒑+kzYƶFΖK\RR`' Ag.ԫP>{Tij39fIA/_P@*tB'9!Q I{_(Z8)nf-*w&(tJ$}2ViPXɻC*{5?f@Pa) beC26I(enΞ]V> ݁OԒ]_d21mut&@-6YIo{W7&T ӻ^saNo7X7>v9ދJV1k2MRVRAY*rVok,<>޲'EX u/M O֍}ܺ|,~lb quüb\gK_j7u 5u+\n&L WͲ\o&u*ګ޻"eX}xAd|zyR5'FHvŴqNN7}VB8QІc NJɊkLD-xv-j;QhyQ JD" 2}L$\^b TTLT:[D˽'=OB8B>IjY er*)AQԻ(YVyDv* `sA{, XgPXA & ei/ޓhE^jrR1ǴIZ*Ԍ6D\yA٠X% QR/,%UD)ƑX7y=g`"d:P9_8i4 7⎨9^&8 ]4{Š1N9+r(J K<{#e(VN]^sn&o22YJ/.0[A}/3Ղsp<;|_`c'萩vufX3p^jR7Wl\.uNsL"~4BP2dBm#$D\dBgLUbT} 6|3jPSo+ ϖ}YCky޾y"})ZohZwy|7-X12K &Ȗ$!$P?F[(ZԞ <,Ÿ, UALWj-&O)-2Hr=ƥW aDž :|LȌ (/eRYr/ Ozi*#Ew6ʇ p֮N]t#^8\0'K#_w߼ڪk>+)ve7i"h ޓ'RHQT\:"Ob{,jw*' 1DjW+$}*8W9V:L3:?^klfqg.+Δ8?O?~QM׳[#" qLї1CQwQYrxrߒ%WU\Rޒ{2`1UEOEBdǢ v]]*M١=BG E]!v^]*3TW5d "@U!1Ԯ: += zu|(r?˲:kW]ɴ%~Y|R.^]rvr~|;"˱2/P…640'J'O$5tC䋪r%d8='A=.TxoU7a7.nmFpR䠼:Qmt(ӝ \]NSCЎ.j[fuoʱ}K|Zw[k!{<"hE9 `JRmu5ͤ}o_WKxS\|gԿmY[>iQGBq&pU}s { qB},R!bow69m4"2SiY"ڰY(^n6\jfeZb /n6ļs! 
FT=+B)|J')>Jc6CXq{s ZТEV^A)S4ŝ8ʪ׽haVD>퇘$'g08 lxWcx9:Q9om7Nl +TT!t9 ܬk r!i!O&Xőɳ1n*RTe#;Jw^$ļVn]wp6'YƱc6]N%T%}flN)&r"(˂8WGvDyܣ}(:!{B4j:ɝ*$nŅ鞸(An#>HthQI @eQCfҨȴPbW ` `S `6m<'3:ؠ"YͯɒWRX["EH"Wy̢3kOCI 0@P'quZiL8@)iiLT tب4O.M.[ SDXfeb $ jbX>ecƪUU_y3/~RJcsX7NMwbWAܖL`\v SRdXD6WZ ί={R}jBjn%{n'3ݖM=8=}Y-5}h 1Pc@:Р\gCatNӄfvN6:Q}M(zdyܣ{GW~Ck( zzͭf1g@g}ل($(hXf<+ E77oM7o,;eq KE*׳;>˂E Y'C:dwm2SNs[ W\$j؞ƻ$ﲼBcU[FzP\4 ݽU߫3z>$V9C|793MJ3$;FnxzIZe5PBR F 5kL9]HܽXwrD30f"&>E~w"2-~?:zN(mjze!95yV%ON;v.v'6nu7޶knu4#i4;CN."}qu|6:YϮ_\V? ַ nPDmeN٥ K 56&٨B!c3oZx\U4Zj'uxjjj˫=N`)60*,@Bq1pdQt0Ƴ&  籋ڀ Q>)G:+K}]g!+R{% Б ʒX*/<}jcL'C{ǒ}޼waϰFZ79._kYiK5"d-s4V헔ct}۳C g=S3f>ֹS>gֻAz+hPRvKq b5n0F颌lAj`x4iOs:u4i<֙(3Y"bb/fh(%goXL˥۴=h,YAF{Wm젠+X=)Vg#v:䚾0S(<AKt9fNzU91"tnRM_]xZ*)WZ*Z*Zbi2s%b``LB~@S”uGhJAj dg>Oг@׬@^o'Yw^]pp:IRaLVE*k]9HJX%<^b̆:x, f̞&: f6m0`vy0;Pf]ct6#UD̡-Z"7R a19Dy"l֌QsȽwJ2$nV+hoN=,!3'=  RC)<M(R!,>&܊J"`^혵78l0`l\E~DJLަixlqZ@Q8 (cUv$k5> S[=g'ɡ-.uO Ws*TRD})Ҩϒl+[;Yu)qb݊>M|ćn4itfyt3OfnK] #̀MctN X-UeǹV=?{{R\Cu`U0<9L@'@ &,>yH.l&8Θc@H@LMgEhlf؆pڌ6m4Y"L{6K_g ]?eZM6& qx3YV}x vP{BՓ0!."+٫ң:#*SV3lHᆲU=Zsu;QrE-rtݳA)WcyRu_%Ovjັ%I ^<2^eNƐ$R(\N-.Wt#ݝwGrz=ԵD%S[2uxZ2%S[2%S[2uy) 2Թt pR"aá;N5h I3`4ҩxE'F9,iޓ"POhJ-^2rBxJ1V,bPq"{`He#f7D_a_q\(dͬYU44'ιa\9s4Y 2sfu+Dq:<4fbKҋXs4%2sEF{(SN[r7z_TbL2c*9a ;DP"RKP4i|W-5i|Mw2s|(>^r=ϲ xEթ$hV;؃ɚs>jғVJhb_O*n{fHjɚdA[ |Ex AfF^,^lpP4iTWUQMF5j,jى:FE9U3!joN "RBJ8fh@oklh|)k7[rٚutz\G7PJy@\gPrU_zo?߿퇃wv~ǣynX.+*ȰIF|?l8ǂPʟ}}6G H:Y4z>z$1DrL&=g (mz!_(\L0ܝq:B!B+fNKf+>?R(#aH7`gY/:yҫx~詯߾䣉˫~^_߾DX/w~|~_~'9^~u :׫~vHÇ_y3/43#w#Mg AQtsz9UK4BI,Ut"X_Ȍm22 x-%?oC ?G9$xPt֌{@ ޾?ij yq^dEVjD1RD`*BG6(Kb!g}t3WT9-F/G.Ժ#zq_1ԁx(LX`,<5vb;$ߗ:v'vv]\5;;.RȏE)6TH{n9E.IC$.<A±~j*>zhc&Վ=h^jyqh4=J{X])qd& QѦcռGXUodnJ`Jg) yH1RK[xrUlȏec\!%utB2wpA~Ѝvivɴۀ?v?^]/y}'CMf Fv&-yv[I>jMF^,UXL$p!ab͡T2]ծgH2}|>Oo|yuChKس>J탴O/ c(=$abG# "zşF-`~w3! yOO?yBH~g_\,%sf>ςF<}i^`3]آ 7KtFT#=}Kz[}|@c4I`)@*krz Kw#(/Ƚ b<faik`I Y ; j!=?982v2M63 bF3c &@[&wg Q ѐēD۟\!8lw`ޢ 8"Wۛ߼2NmbxgKxq?_t" 'J)FdW]Ӝz&5Sw_)GBp\zE)`#)X Im!@搭S͏_pT~W9܃M DJ Ok2fۢFd!SI-޸XXjw)dcckb'c<1xw336h{T!X&))<0?^-vYeb:)ԕâMi{jHFcpYk Y[])-.Ԅ('mDBگo~qK)q2ӷϰe[EwXƅpVR^TFH , WVƞ;o ɩZ!s*73`E7O]v&"٤RJ9;Y^{:Z#FQإRg3khp" űR}'dgf75RAC=0G%)邤Fl&ORD63IG"(-MR$edG.Πf$e ;(ǫsU̠ (F;Hgl$D!d ?|3G SętgtϖkIFP@؈Fz% ;ATA6?L5fc47v;̇ vbdp!S|q̲_vo}(C{5Fz8;ގLqfdnyC<^h|xDs{6?h|Sa;9'7B453"!Eڝa3e\bA֘ev;ݣE0;޳>M{'}qgf,0ܚ?85WwU :xmtd_} 2.o?VO~_xb0[hZq=*o^*4KQqN&v%MIL ^xuQ޽NҁkmQaSЌC4z@9^M6p4(XW''*&zg?}wӯh9{oSR@e\T[%a4GPkL!U7>QV- Im{H;cËT}pY ꄿ}{T-r`aɱfm(.B GPVF,z5e6PFzmaRd'яw9 =fڧh_cM#MLFl,N/s^T=)$hм}ڷ5ۧ).' ##k|U0 ot";X }^\'G%k FHL/t&q[/OC+3vgǴGyZZBߢF7I:{r/ &9x1 vɥtBٷ!(6E#s"\d΂ xBѹcx>DºOy~AbL3Jd}[8]@wv`I;i8lJ0bǰiҍx9pUcE2eUдp5Ҹ'~EhAwڝTȘf;-@s[awd`>2ٳˎ7.?~V/PrfWGe3'W~Jz VxYU]roCj*zf/3 5)ie6ӵFgآBC~݊Fꨯ_^?)PuߦL %{^suw%rvxdN8zvh Y]QOVkzDd7/FGAZ$_ߜ,ѐƂp1^ ):)0v_+-.:Y>^#B>vzվ;ZO)mX).HA;dR #yf@sCR=̚R{`Utғ:JΣ!CWIb1Ը 1g$\6zLrm5skH`EZZHZ6ЙoT' l%M݅yd8}[Cp7elўVGu.ghwnLcבXFOJmqdǛ EejH caWX K gϢ,AVRǠ dOg{=*kwvP n!W9IWxGQ~ &tkd؟K$gr:z 7)eXksSq"0d׌fhoi,X׋iعwnP@g?/dv7Gƅ%D3t QeWkv|sIڵK((ͨmy7ȔKmX4;[ɠackT;:mpJEoÇٲ^l_pN}'qgΉ!g3xbsgxIeԱnwvP0 9R?f vѝ$JUe=~mLvX>lȢHgqo*e\L^{Sn łKegv`Q<$ӰǵK+-גML^2)ɍˤ)qWύG6Ȩ4'>u]q}h{T@zH7A/icLj.%%^4 p'38h{QEÊ]JڃPNL:wƫlxq)ܖ^ F |. 
]sJk<ݻee>\جMU*YTޱUkv(272 ]yڻn?w~󵛃=w-u1,-(of|4d^KEMhA٘a^c eJ{8_)xn} 3v쑿 ,gI6Շ '٬>YW=Ux:*1ηË Az8J9=GkO8F/ vE>N?Xv~0bZDʷ!?Ʃ)Gٶ)=n5}K0.-#rfr\\],7o0cK%Tw/.lE1|J`0?ɗsXK2dqd*z1Jl qu*pi)rf7NE<ʓl2R1GSI/-Cl7̗w+ ,sхijpLԄqjKdY *i +`ǶEsqPKɖ*ZPqja/Q xO%oА+ G߭v-Qш$xlTj,F,ؾX6P:yfPi7*MFӋoˍ,&5 !85IbD$\)#ӔaNsg6_=EԊn}6`ZoXL6[kxD*_v4L6$%O r_ jPPcDl>V퟿?Xo?[!_moj;BkW`~,N3s loMF#`Q ~)pzaܿ5HAKm@DD1JtJ7:DNVB [!4b9q)1Q"IQLP9M\FʐIccWIclS+y[cp#VI$ x_{*Yvgxhն8lx/%sRSp|wQ|wQ#SE񴙀5. X@ύ)n:wSCxzJC!G'YV[kj7-"Ӌ` nޖRgY8wP5UJk"xI-,>;r9UT PzX꛺+~FiT9T1C|?#~w} It9K!f'' eQռs@n>z|KWJg] i4dbA1]>dS h`vX?ߧjUUWmDvxf)LdzURߜLn~7>> ~-MJC KZ5d11s046;H3m+UscMfo2ifbei hF:&|dPNq4~Sn]}wm?bNc R9j )wۢ"ކͲ7T9n+(NS3,gf=8mCQ/ هi߶lGgg c?Jg͏2| .dg YJEb38{hW#N3slc!Em-s D[; ց|ȹ jblۀ70SB=tT+uA"I.4Nc eL]S0In4@L8f:1v!:TDBYy(öm1ß &TT'K*~"},U\1Rm]Y chY'O Hüq$O[Ѷd}x%R.jklRfנbSjIJœ[շg&9Z;4y*joJSK޳VخſMmqZmS?4ԏhԯUT|NN1@fʶt Po\!*j)f<z0Uzna0q e{֝Tv4JU;!TSx?bBhCxv=a|C.4ohZm9}Scj`R+]E?Ly O˻  ^*G"CR]GC(%/Ϧf^b?۶<#[K)SLh,7g!7Ǵ%.<ݚ ޭ l*n7[y=S 1·rO^zY7',ޙآgMg?Ճ= ,z.M7ONշkgm܃3}9fKj=Lno4rMaR Ru.!A.bg"JݚWRnU5CRԥQz(u sER- s\z(e yJI}SZ/=u*G1|u +~.jH._ [K@ dj[4jXT._j+{s|Gܨ̢?dQ8OZ$ZxAJµ;U+HJt(r;gss^#QʦQ=jXo\!u8 o?;th\ܖ?&ﴝOwnb4Ϣj6b+^pRˑB*{S<:6[U\:0&_ iFp" ε=FB$Gv>>^j*xAbOrDypR[5ae(/3 ŀ]ް,P;\*JH: HpZ"'|,/rC)VE@)Vn(-VT;w6J:w n埤LjPʊV*?}IDꡙ6Ĩ4rTaaOcCQ:T86.o(էQm\)$j|MRءI3=i.'"Ilb 1`phFY)IFBR6 #"9}|ŋ $84 ivÜQ$e?/ꔸ9h7NXB*-b9[2a=4׺Xk06f׷sԼNk1W4ךt^=,"5Ĺlx 9EnCh$"{0F: LOHkb#3{NP_̰iT2(QFrCNy߸S㼝ڵ,]sRKk3H$UMm#\!@ !8V<,99Mmu9hʼnҢc%)%c[V9x.rͰH O2*NHEiÙf""4L8˸\Rb2(r,UtlUECh$r-`a&N % >(%9ãDEQEbzkC0ƶZUQig6EkrDCF")4EkS( oO%Xߞ7^Ґ\*Ť[cl8A^:VRB@f:Ԇsۆk%;C`Ug|!@S.q1dD)qKl!Is$ ax +6 V|ضTñCT-Ʋ3TrV9սr;x𱖂$C!P>ݹwǃE!dJqRr/@5hBPЁ, X@|GҽƇgYvT>=U(Wp𱘧 eݞ[=;^j)= zn |Irx;mOs%Qu59#yWJݮZa;D"(գr}ڊ6suzu{?Lg6 _E?L<](_ظr?SwFxo]ڗgS3/~|џMmTMZd2;t[k[\,g0!Qx@X'5)˳!)Ɇx7.*{=zP@'>ޭC۶i[m^|ݚ@+h$&8t-T. w1V*neޭ r)PpI;@jșvi%}*oO6*ox 桼*}"I' .(I!5$B@-IP*/oFN͠0@6s2?p1yB?˧{TMy7b?.L2e0`uӜZB5/!UCI*#P{+s+.0"aB[5>pT(:-c |47ʂ!a_7{\ -e69 c1NK ᆖP/p*AK'FyZeAQn#X4' dm^YQۉ2+߳dXr9& ]+b,w|VrhLeZ#K\ʒ^yELk*(::"Xشde5SVc4f$"e(&ZkXrKT(KIAŇ5/AP{NL!$b4!P}"u3x؄aL3r9gvY6gQ7Gh)'THT:jL r&Prc3#+}fBMЭQ2~ϋqLKRx#YLIz\L)8cPijP{F h"JB)%ahF 2#ATX&PmI-D {.QxARRe)paQY,um-Ej+H)-+`BeC5Fxu:i{AH mH A Jg2T)Q3)M[f2ﴣg2͞]hc&SF1E=dJ)`L^#$Μ:b&S0(ׂnLƾKgmp"ucwnJƪD ö/d(ȞC٪[=5;&,xu5\=a dTG^% 1!צfT ×&J! 9=)=t @mX7,Rr;uw3#zȱs2st1ˮI;ׂqͱ)E`rӻ1B0jf t2ߑNϻe?z,䍛6 A7qGF|ú7(DnjwbrZ9]G(9ⶪX@j߮~ftVD( kޏnߟ7g߽&GCe޻YrMwU\hx+WOOw|[[ ! QCs} ){fޟ-pfox~y9YYjNmiB6d]8'Qp TO__aRy;%Tb`pk! \49Yjqp:|ޙXUW6do(S,QY)0^UR?:iFnkiPídl2)eD &0A #ۻЪ;+thy#2 x9{w qobGsai?];§ F4q `L!BB؜S~U 8{KHeǼtnlXQ J/R[%WXyZCVTՒg|r7 hRY<3&]A, I ׽4x3oYu"<hD+L`a_/9?>_]_)x=GΔz@AƱ4Be](k }P6ßLCo?Nt4gMu0GҸ[>e]h2h2h2h鲮\O(k-fR #O7GLfF!\0VtZa1C=qU{4h^`ܹݩBH`I`)SZZ%ҡ SYXWτ򂗀%sRX0 Ee 27M"0N0gAI±B(k^ PHcVtęrHd *j <7ae-APX<P?t{; J !Y*-4&̂ B!0}xh{g0븄2bQ?.lwE7UW}7D%ׄ{ {ƚVn\05E%zj7b^SBE)c߾8Tq|*7:WÐԲflq*D(tvZin#;;픾`5DvNRl$h@.:]Q^Q' Y8]QwuE ${DjCQwdIG2')bAO}Xej^@a3EAD0zU!_@3,ί[&S82(! 
.Za S"+JpiPQx(D%y)mYV%N;+]xW2%5jP.7.7QZm݆XD^:4S*]%(KWgEXKee\2gϐ.C34x 87 U9)̟Y6T#S|p(ʤ4pxƀ9~_} 0" DŽڟ7_/|Ű>3ݙG hxxr6@4 +''D߂9% ~W`l52Ip}j['&?͒t.޵_*kΡ@ X ẇ7rIU֑WK\*@sVhh{̓vA~k=ۏ#^0B<~|2Zn~< c?"4_чgY}N-<^ pƶh/4$uL[ZבIa^?e)W?^k^1bg[{5}eXUdC&&ggO5ӤVDu]T |191j HuKɴI].ՠFRM'ixq6poX #Aé URéY6 >IȂՆ /8#CGLdI0  2Ug [F6mh:j MX5Z~ż~2Y[:>OWww<ì>Mۇ0+[}q_<| WU5S'򰰻@eϒ}ސElQNc;g0#kqBǙ .o CE_d'd6(fՔ2'b㩾X}rGEd0fd}!tM쐓ȃ\70+N FY)@76+FEj@'+=B+e"JI ?A/_mI+ڊjz K}UHfH?Mf]zp4{fRQ)F%f, TR*?:iFj}i٠^#?rvuݵ{ eQnx|5sE('1HE*ơY }Lp<T}YSٮxzJ{ $X]O%.\԰]OWnTRݮBM­)0CBq%ɝ'cDS͘3hi"4֒+5m $`g8*hnػB.T8r\cXMw+ [V9@Ձ\R5cUR<8eʬ̶DJnWTL)+qyaXV?s CQ<:\6~#{ow+ص}Lǣ<0sYwy.ڝNVB քM3{PDIcq9n ˰\q־ҞfUzkn!kY1νtu9if+mݵKq?t%Q4ײedx'UAV*Oa7|tT`DKijSΉ2e;⎺sTy6wJGfܻxRM֒;  J{)?9 kCX*h" ,Doˣd{ج$/ϧr*n/wDRdB;,~yrwROFU~ybpA@ aXHƏσJQ:ж<Xʦr7ʻOGs r7CfFB#@CiyxHfPc+AK5䚍钛dd!?N0 W`q ѭnyr9؜.]8Js;[(ߐs[=CaM-N<&4պw47ehq եsS-LFAClTq2إZ̤k\8@65䶓u9̬4U"MҘM8a^w3Y<7 yrM my{p۳:^(r3yx8E{fVnܬ) =qKZeSeo{LWΕ=e"#$9"-L$wFXA tʾ#vk:f8ʗ~qnU 'hLe:⼫xfCAV.)<ڭI/BLck [!)b~\g(OA9 w=cGNM#9f>Eig2~Ǝ:/xпJ_}eXഓvT- r)ekMARmjLQz( X[y+ G`jȹi7RsZRJPjTF)n(e+X1JtCKJ(=a]}ކ]}⸫Z]z( <ZRPj6Ijg4J.V+qRG]jVXui72(e j8;QʩJ9­@}TvVO7zlE=y,Wv9.eB ?ʭн!Vqn0]zccRaXg)pePe%0_gzonۤ#\ ޽o>~" aH2gM$4\+7Ib")>="$V Bv* FDRwmZ$JAh~T)'e:o6T3SpiՂAP)@d-+lcݢyBn4M[u)JbAӾbr(ճ'9Dv Ӈj216l3l2%">ĺLh7 uCJ(>V92/BJJN:G"..x}*PID XT<$yC`}$yUXYLI9+GhB(l"}=pM=>Eż? b9׆r2}Na޴sy 2kA.ڭJ8~i=OJ!hBc-XC {}}!r#qNE wu(ex$y&$7lԁب#H#EX2&/)h EZpjpb_;ll=8e-3X d#:gp=_zWk0EizO(ųI9=AvԎdG\+8!FPT,HE)E؋(T̚RS0?t,𶴩cT1#4{FؑFqF"(- xԱTTD`6n(2l<{m-ތӰ5F"){=15fb}}8#[v?E"EZb9yŧo>y.27 |E3{.*@7Yl"mH >r05u,gP) oyg*tn[<mo-)Ĝt3S;+ne`W`tꪴ 5ả6Ƽ'αx1Ts[yMɂL0܉ SGpW)!G`dmv6`Ϡ;DjZm3TW,6)շXdzeX :k Gh% ,Dgߏ;J8{sGPNy;|Ut+-!3]$* Pr]ɸOkT+Q$7TGmh;V.=ʾ#O۱)x+:!?9DKaJ-6T$:֕|G͹ S-kX:ŵb!ZSV6]&atJ#vknOIt}ڭtܴv!?9DK`JLӋv eeؽ]+RʃY}9LEʢ̩LL fB9""zȴnRڰ3dsx$:RAlmE:I!6a ?|Ji1Hp%1GG(Pb#9CJ = Zhz13EL񱾒2Ltǟ T>A9;/h>OrVyF+m\LG i p_{^g"4N"v;(X(^96~$v]e{ٛS~e:\_Ѐn|9Z$$sQhee,hG΀d2YJgW x|d|h x7er6 F/EGkio/2C;yګL#KݜOvzq𴾼Z c۝j*uG[ 䉦fIʝJ@Gn؛][Y(GYG[1esr4gMexպy9de5ڑjXR. eGQS($V ?bjQӑbpVt:z2vJ(m]꒿iʱb"z}N2{XV>b!6.<JAB)>="mzk[$\59OC6BkziuC`8Zi*r)뱵~J= v W8 .CwUn(CX8L)<؃zlIcOÖT!9ih-Nh/9])$j]XњڟW|aE~D!(nyٚg?@4=N۳G"L8qT7쐻˩кz.9|%!aqƱ'IB=ˈ>I"B OHg9Y :R˳>qbCGq )f;X2Da, H9d~8B8'4㱯KqQoT~2d a-2b:vk_LyӱȻ籩,nbF2[ 0WW^&CpFV&$Ώs4U} KTR#%}W瓙? HَKE 0 G"hwOl`AR߰7YOB90BS耈Wk3*Lgwef 믯ygs}1Jի>ӫdMdpl8_@χ5[̅?./pÛQyp3b `OGּtu^/~\&A ߳[ѭ9 /—ѷ~ѥd7R|d}Fp<5{6~5lP*CYlEIt5@.WۦzQUrjT׫.K/q| Fɧv8oJcb]N^_MF[ Do.~0΅O@I3o'/^|Q /_/y2p+Fai%W3L_4V23>h4~`x߿&Olہ_PJ,M0`r;a'd$BrDo/ TsP>0Sf%xF&تI͜O>7>[ẎQ\2L#K4"shs:F?\/`hk0OafM+dzm h _V!N@@U3q]K3?ii25߿_X5mB}J2d91&'chsDv=L8*pvīqsI`Ύ?;gG숟#ޠ#k妄#ZG\rN#ư(ltG+]M9F#0D'ͻD3C]"}a\9J I2~yO]COd5$ %'{Dtsquqg@[f *Td o S(٢p0QDUȬ#(xu(F~,Hˈt=퓮cXHa=U|̄ymoWj&e>I&nE7ķ/6k[m&[o&g?r1A9G?Ϯd$ȱd]s6Wenw@!e+UW͗R hk#Y^=6IG Ar82ĎL5ݍ~L[K9Udrtu]1R@LS7(z.4]Wp!ZK.0>-zJt|kH Y v) dVW⠞8Bq<yU(z GsNYVhkpY?N )f?,6N˻rHz;VC9c%a5`x'kI%gF..$#i)5Z1:kFP0!|o3?L5+l) B:cL!7Yvh`)0Jq(Un%j"^cRUo;s 0v;bƷύ3X؂zrBQ ? 
I,/ʲnn'sv;g,DCiFb]T_{j5<θ]Mzv~\͑S\kM^ Sö}-3^f] ~zpwGt(PƔJ>u=\'8@2'jƒ33)./̫է_yëշ7Đx% !I'mVl'Izҩt'Izԓ6?%M'z.t"M݄ujj쩛P݄JCd&ЅT6eEN@V3爙9q&ǂL3%|jwAzʱKct'^;JEPDղKgjdD>^^ͰvڵR\-#+I<\NJXN(+lg.wY궒C귇.{s1>jBN_WnvrώpQtVbI1;~H.\Hڨff3] |CKYx},T $u,$@29?Fɛc<<%xEn.#g_ {>VX#Fv{} *rA+2J.)di$iZVE'˪H?p8[:VƟ 6uFDG80)J<3miQx϶f>F<o!^RM0P-( "eG kGɻKEm`XS|_u{q^.ܩߴ'ksV.$$JqC箞b o^ۋB*ˆ i_|\zMNfs`?s̉@g@y7KXγz{/j=: Gb8Z~gcIP&~ kҬ'<&}V/;868q@9ᴒgOn uk 9:^@.b]Tǘ`(A Ir2 2d N-1tX8P,Vd)7uid_܂2¹2w< Y&)Jv:iE)ciP Fki8nrj-/H5]V6e8JIZLHRCEzds#JδyJũa+trLK`#ӂ }~϶fKid܂b:3gфΌΕ![Pٯ!4ݰC4x*b,_zm˻?װLǃ~38{'Xقrƕ׋>>sBU握wopqs{Be =O=zGF-8P!ޤn_)=7-p2r̮WO]aaZINw1)0Sɲm/]楂q7)}+kgo П;y) _홸\^׀h5K[ M$JdQ5rpD[w zH҅EhDoeN8#(&PL7g0'V;QHoqQ1wSkL|H흗MMmSV.*>77yxEnσ^1,W (OQ2X\ SzWz B"BWeGZæUB3бGd j>@wL5r=VRQyZRg+6OxNsS& &!n yQVƒ =a,֓껋[YĪ[p]O:7C (V8z^g.,dt2sJ)қrj4iߔ x yGHaȭd |M#Ƈ@I|פ]5xu[ChR0$|n(J i T326ɹxf^6ִ|xlwݍwߗeVe9?̃sn#oggHs7圆?oF<'زSk.C,V o\DSdI4'cn<o4nx/1vKOԌn]H7.I2pՂ1hTĈN7hzSiҥHօ|"!S-$Ȯ6o<7&]]b5kÕGhoM!XmBRɨƜBr2rH#͡@Ty#4f݌;WyQt︹ŋBT\ XBS5H Sk'v2rL(>XJ"h٠՞6@-Gjd N-K﨡t<䂄}yY&,6En7X03cd>S9+qov6b`t(f|c,=HbpafT;VTnpItQMFGMTUc%]CkLm mTYG@(tgT蓔bd%-G!Y[R-$t1K)\7ÈTmR-;IQKi\eZPEG!>ۤoI2RN%+RqRZRNvqK)qREKQH>6($G-ycyXJ1$G,,+*i!p v)lZ29yO- B&r{EbS;lRO*}XsJw;wdrc@sWj9?M>J7?fŇOWe ՁIz)J$/a^!p]⾄iw$PAI"vR)>N:pt g<&"W/a$LCN _`tQZE!߁(Gdw [͡M*riFҽJo5I~ת9DF+Ew~1s7\__nYq/R+<#\g7s%2%W11/.ǤKO#@pl͚(pjq@nӃպD6 b iSkT燦-Ȧ-JuoL!ٖ"1s_k|( +^xg &Gp]Jn>i&hصOqK4q||v8G~}QGK4I8H1Rڟnt1rGwwO=(Ōi#"ӖGZ[k$)HfLi2 ׊6yިHTWƺbi@) 'A}16/2_#!Vϝ1Nz; +%u-/&ɬܔ!0@-{Hb o>1mрX E_k{ҴWwG4\ i:WD9h#7yS˺qg>DJiky A4ϫzKAg*9pjsgHEqD+܂ !<]~yWBpJ~?-o5B:zYr7:AêG٧"OxkiV(I]-dP9+Ȅj pa ),9U HVlQ0Yl3Nڈ!I=Fqo%7X V4^,5Bpȵ %yzP*c3$+@b?bT%)wJfiQpk7-6H6B խCU'wu47!UʤȞ̡%_xd,.4Л4}])/>/{?_`"}s-]UBY;g)߄<[|KS ~\_%s ƻm\,6g+mqNjn֦k11$TI| Ю1CLC Ƹ!ęLr+قPCeEP8hkx˕`AuX'Z!ԮC7;+F%g aVn3у]Y) 9d*g- K\I}XjR^@,s"yJKP9G{1/8f9};T1L6"3ޕ55$鿂66"lk^jc,-NP!%  L!(@"Ï=݃bab[0Lv]~r3y"+̯fZ<}Զo|OC}1&Zuczy=Ť!z&\Vt~b*&h4O*l!_FdwBb?=#p`d.jxl@Dph8 Bn.:b|" }aCYyP%HG޴6Z.G&d)¬D$*, !x4nfO ٻnPE5UW nƔUԊ^z=itQ^@'Dե9<4?=: y7^SJZ0DY=שV<&GٻnPC~Jʶ=3ߘN㼓Ú3koKi#0QM9t~+\/<0wd;^x=L=kE o4RXN8pQ V4Df%s|eS˭Z/v#ks-X{r-5D;x}MG t5;~.Ul?5kk<^}NOP|kmGWd iQ [Z][Υ~$zǕn%ZJ؏bs;v"!&O.,S=2M/BMrF1j! yJ癳sϳO[3d'>boL0s_*S Y!uՐ(XK6[+ʬ 8`L)Ϛ壺fݺŬ/z`<'=(>+^꙰+xu֌{GvDr2~n&Sg/΍EŦϻd\{1g5M=/FE{<տ~>;#?h3A"N%UQ̪WOY~EbryZ7 䳣 |vvNFCwxQ΋^%/o\V.׌M.f^x0Kqh"9_wFk|ܢzRh6ǸI<<7匹/|=^ 4L坂dUtK&n-=YU'BDQr GDň?`Q"\F_Q6xuֹQ^1u 1!3YAdEMјaBTԳRo" Ɉ QQHDB!A)M 9Y?$_ .H) SLvWCu h\J0zejހ _]b=]XJW*1OuE ^S.M>IyGxHfZ^XfPE.K %nuw..DTjn5T*5fosruf [VnRL~/7Eyqw3.1?7R79ntKlD+jrmņ X@8I\3͍)B [Gɹ]y&)DBPmZ2PmYZ֍bSӀ f[&,iֹFm2YF [V0Y) *C|ȼVq'rJ7:u.*c4(RBK !Ǐf|K&AQpw4Jf12K)Mf@C~pmS콍~n֖%M.ර2W[h+|=fJT^Duھ d 7;Cք[:e$3|/xJw e7f0x|]rٲM%#-WHȔS^LF/Ne=vCr{eL1!TJBHr%Ű>e[nN$'KGϐ#(>Ld{6H$?<RW͆˫QlPX aM; QllM [rm֖DHxo7v|Q 2ܨ=I೸RDU`ĝI#yd0?=Z|-zpPz-t Qth P D&(LS]δ"RE;DÓōL-SU1f{ >c(2aVc|?]_o&>iσ3!!%?FZg4# A$QwF|>н 4.fv'їIH'N,@T$d"[mPQnr)nה_L7O$:(T\~U ^'a a7,NoQp&7BH#D_nnĸF#A6KB^Q+(,ΈFpr1p/p7/>REBTDlsı"}Y!um{Y>]7ClSΫh2y 'D08IFxD‡IGϨHQXvKH$佲4 }WS Frpm@YಒL`F+?XT猲"vE>'\ބ?W@3~.g/̅ ]_lG<,!bL7WwF/7kqHLL[8i1Lɮh\ ?WD~0awĀi t\fWܝD{#q \@ h_Zy!|$aeȿw^h#$8c5UBaf)Jc 83@9hlvȀCߦ`+MqwI! 
var/home/core/zuul-output/logs/kubelet.log0000644000000000000000004071260415136703205017703 0ustar rootroot
Jan 29 15:10:35 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 29 15:10:35 crc restorecon[4590]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29
15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:10:35 
crc restorecon[4590]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:10:35 crc restorecon[4590]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 15:10:35 crc restorecon[4590]: 
/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:10:35 crc 
restorecon[4590]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:10:35 crc 
restorecon[4590]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c14 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 15:10:35 crc restorecon[4590]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 15:10:35 crc restorecon[4590]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c19,c24 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 15:10:35 
crc restorecon[4590]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:35 crc restorecon[4590]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
[... roughly 200 near-identical restorecon "not reset as customized by admin" entries elided: directory and catalog/index/bundle JSON entries for each operator catalog under catalog-content/catalog/ (amq-online through web-terminal), the catalog-content/cache/pogreb.v1 database files, etc-hosts, and the extract-utilities, extract-content, and registry-server container scratch files, all for pod 57a731c4-ef35-47a8-b875-bfb08a7f8011 with context system_u:object_r:container_file_t:s0:c7,c13 ...]
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
[... similar entries elided for the serviceca configmap contents, etc-hosts, and node-ca container files of pod 3cb93b32-e0ae-4377-b9c8-fdb9842c6d59 (context s0:c842,c986; two node-ca files at s0:c377,c642 and s0:c338,c343) ...]
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
[... similar entries elided for the etcd-serving-ca, trusted-ca-bundle, and audit-policies configmaps, etc-hosts, and the fix-audit-permissions and oauth-apiserver container files of pod 09ae3b1a-e8e7-4524-b54b-61eab6f9239a (context s0:c764,c897; individual container files at s0:c49,c263 and s0:c10,c701) ...]
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
[... similar entries elided for the console-config, trusted-ca-bundle, oauth-serving-cert, and service-ca configmaps, etc-hosts, and console container files of pod 43509403-f426-496e-be36-56cef71462f5 (context s0:c0,c25) ...]
Jan 29 15:10:35 crc restorecon[4590]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
[... similar entries elided for the openshift-controller-manager config configmap contents of pod 7583ce53-e0fe-4a16-9e4d-50516596a136 (context s0:c14,c22; timestamps advance to Jan 29 15:10:36) ...]
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c853,c893 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 15:10:36 crc 
restorecon[4590]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 
15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 
15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: 
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
[restorecon[4590] emits the identical "not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16" message at Jan 29 15:10:36 for every CA certificate file and OpenSSL hash symlink (*.0) under /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ — the extracted system trust bundle, covering the Amazon, Atos, Autoridad de Certificacion Firmaprofesional, Baltimore CyberTrust, BJCA, Buypass, Certainly, Certum, COMODO, CommScope, D-TRUST, DigiCert, Entrust, FIRMAPROFESIONAL, GDCA, GLOBALTRUST, GlobalSign, Go Daddy, GTS, HARICA, Hellenic Academic and Research Institutions, HiPKI, Hongkong Post, IdenTrust, ISRG, Izenpe, Microsec, Microsoft, NAVER, NetLock, OISTE WISeKey, QuoVadis, Sectigo, SecureSign, SecureTrust, Security Communication, SSL.com, and SwissSign roots and their corresponding hash links.]
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 
crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c214,c928 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc 
restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:10:36 crc restorecon[4590]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 15:10:36 crc restorecon[4590]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 15:10:36 crc restorecon[4590]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Jan 29 15:10:37 crc kubenswrapper[4757]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 15:10:37 crc kubenswrapper[4757]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 29 15:10:37 crc kubenswrapper[4757]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 15:10:37 crc kubenswrapper[4757]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 15:10:37 crc kubenswrapper[4757]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 15:10:37 crc kubenswrapper[4757]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
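[Editor's note] Each restorecon record above ends with an SELinux label of the form user:role:type:level, and for pod directories the trailing MCS category pair (e.g. s0:c7,c13) is what isolates one container's files from another's. A minimal sketch of splitting such a label into its fields, assuming nothing beyond the colon-separated layout seen in the log (the Label type and function are illustrative, not part of any OpenShift tooling):

package main

import (
	"fmt"
	"strings"
)

// Label holds the four colon-separated fields of an SELinux context.
// The struct and its field names are illustrative only.
type Label struct {
	User, Role, Type, Level string
}

// parseLabel splits e.g. "system_u:object_r:container_file_t:s0:c7,c13"
// into user, role, type, and the remaining sensitivity/category level.
func parseLabel(s string) (Label, error) {
	parts := strings.SplitN(s, ":", 4)
	if len(parts) != 4 {
		return Label{}, fmt.Errorf("expected user:role:type:level, got %q", s)
	}
	return Label{parts[0], parts[1], parts[2], parts[3]}, nil
}

func main() {
	l, err := parseLabel("system_u:object_r:container_file_t:s0:c7,c13")
	if err != nil {
		panic(err)
	}
	fmt.Println(l.Type, l.Level) // container_file_t s0:c7,c13
}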
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.161746    4757 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.167752    4757 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.167798    4757 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.167812    4757 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.167824    4757 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.167834    4757 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.167846    4757 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.167856    4757 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.167866    4757 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.167875    4757 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.167885    4757 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.167895    4757 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.167904    4757 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.167912    4757 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.167937    4757 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.167945    4757 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.167954    4757 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.167961    4757 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.167969    4757 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.167977    4757 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.167985    4757 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.167993    4757 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168001    4757 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168008    4757 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168017    4757 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168025    4757 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168036    4757 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168047    4757 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168056    4757 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168065    4757 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168075    4757 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168084    4757 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168093    4757 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168103    4757 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168111    4757 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168119    4757 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168128    4757 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168136    4757 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168144    4757 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168152    4757 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168160    4757 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168168    4757 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168176    4757 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168183    4757 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168192    4757 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168202    4757 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168213    4757 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168221    4757 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168232    4757 feature_gate.go:330] unrecognized feature gate: Example
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168241    4757 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168249    4757 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168257    4757 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168295    4757 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168305    4757 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168313    4757 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168321    4757 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168329    4757 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168338    4757 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168346    4757 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168355    4757 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168363    4757 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168371    4757 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168379    4757 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168387    4757 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168395    4757 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168403    4757 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168411    4757 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168419    4757 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168579    4757 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168592    4757 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168603    4757 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
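[Editor's note] The "unrecognized feature gate" flood above is benign: OpenShift hands its cluster-wide gate set to the kubelet, which only registers the upstream Kubernetes gates, warns about the rest, and ignores them. A minimal sketch of that recognize-or-warn pattern, under the assumption that unknown gates are skipped rather than fatal (names and the apply function are illustrative, not the actual feature_gate.go code):

package main

import "fmt"

// known mimics a component's registered feature gates; anything else
// in the incoming set is warned about and skipped, not treated as fatal.
var known = map[string]bool{
	"KMSv1":                 true,
	"CloudDualStackNodeIPs": true,
}

// apply copies recognized gates into the effective set, logging the rest.
func apply(incoming map[string]bool) map[string]bool {
	effective := map[string]bool{}
	for name, enabled := range incoming {
		if !known[name] {
			fmt.Printf("W unrecognized feature gate: %s\n", name)
			continue
		}
		effective[name] = enabled
	}
	return effective
}

func main() {
	fmt.Println(apply(map[string]bool{
		"KMSv1":      true,
		"GatewayAPI": true, // OpenShift-only gate: warned about, then ignored here
	}))
}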
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.168613    4757 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.170840    4757 flags.go:64] FLAG: --address="0.0.0.0"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.170868    4757 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.170877    4757 flags.go:64] FLAG: --anonymous-auth="true"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.170884    4757 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.170891    4757 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.170895    4757 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.170901    4757 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.170906    4757 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.170911    4757 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.170915    4757 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.170920    4757 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.170924    4757 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.170929    4757 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.170934    4757 flags.go:64] FLAG: --cgroup-root=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.170938    4757 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.170942    4757 flags.go:64] FLAG: --client-ca-file=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.170946    4757 flags.go:64] FLAG: --cloud-config=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.170950    4757 flags.go:64] FLAG: --cloud-provider=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.170956    4757 flags.go:64] FLAG: --cluster-dns="[]"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.170962    4757 flags.go:64] FLAG: --cluster-domain=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.170966    4757 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.170971    4757 flags.go:64] FLAG: --config-dir=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.170976    4757 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.170981    4757 flags.go:64] FLAG: --container-log-max-files="5"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.170987    4757 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.170992    4757 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.170997    4757 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171002    4757 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171007    4757 flags.go:64] FLAG: --contention-profiling="false"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171012    4757 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171016    4757 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171021    4757 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171025    4757 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171032    4757 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171036    4757 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171040    4757 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171045    4757 flags.go:64] FLAG: --enable-load-reader="false"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171050    4757 flags.go:64] FLAG: --enable-server="true"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171054    4757 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171060    4757 flags.go:64] FLAG: --event-burst="100"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171064    4757 flags.go:64] FLAG: --event-qps="50"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171069    4757 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171073    4757 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171078    4757 flags.go:64] FLAG: --eviction-hard=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171083    4757 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171087    4757 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171092    4757 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171096    4757 flags.go:64] FLAG: --eviction-soft=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171100    4757 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171105    4757 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171109    4757 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171114    4757 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171118    4757 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171122    4757 flags.go:64] FLAG: --fail-swap-on="true"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171127    4757 flags.go:64] FLAG: --feature-gates=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171132    4757 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171136    4757 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171141    4757 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171145    4757 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171150    4757 flags.go:64] FLAG: --healthz-port="10248"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171155    4757 flags.go:64] FLAG: --help="false"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171159    4757 flags.go:64] FLAG: --hostname-override=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171164    4757 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171168    4757 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171172    4757 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171176    4757 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171180    4757 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171184    4757 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171188    4757 flags.go:64] FLAG: --image-service-endpoint=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171192    4757 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171197    4757 flags.go:64] FLAG: --kube-api-burst="100"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171201    4757 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171210    4757 flags.go:64] FLAG: --kube-api-qps="50"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171214    4757 flags.go:64] FLAG: --kube-reserved=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171218    4757 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171222    4757 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171226    4757 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171230    4757 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171235    4757 flags.go:64] FLAG: --lock-file=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171239    4757 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171243    4757 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171247    4757 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171254    4757 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171258    4757 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171262    4757 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171280    4757 flags.go:64] FLAG: --logging-format="text"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171284    4757 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171288    4757 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171293    4757 flags.go:64] FLAG: --manifest-url=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171301    4757 flags.go:64] FLAG: --manifest-url-header=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171308    4757 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171312    4757 flags.go:64] FLAG: --max-open-files="1000000"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171318    4757 flags.go:64] FLAG: --max-pods="110"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171323    4757 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171327    4757 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171331    4757 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171335    4757 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171339    4757 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171343    4757 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171348    4757 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171359    4757 flags.go:64] FLAG: --node-status-max-images="50"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171363    4757 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171367    4757 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171372    4757 flags.go:64] FLAG: --pod-cidr=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171377    4757 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171385    4757 flags.go:64] FLAG: --pod-manifest-path=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171391    4757 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171396    4757 flags.go:64] FLAG: --pods-per-core="0"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171402    4757 flags.go:64] FLAG: --port="10250"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171407    4757 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171412    4757 flags.go:64] FLAG: --provider-id=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171417    4757 flags.go:64] FLAG: --qos-reserved=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171422    4757 flags.go:64] FLAG: --read-only-port="10255"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171427    4757 flags.go:64] FLAG: --register-node="true"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171433    4757 flags.go:64] FLAG: --register-schedulable="true"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171438    4757 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171449    4757 flags.go:64] FLAG: --registry-burst="10"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171454    4757 flags.go:64] FLAG: --registry-qps="5"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171458    4757 flags.go:64] FLAG: --reserved-cpus=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171462    4757 flags.go:64] FLAG: --reserved-memory=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171467    4757 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171472    4757 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171476    4757 flags.go:64] FLAG: --rotate-certificates="false"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171480    4757 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171484    4757 flags.go:64] FLAG: --runonce="false"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171488    4757 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171493    4757 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171497    4757 flags.go:64] FLAG: --seccomp-default="false"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171502    4757 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171506    4757 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171510    4757 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171523    4757 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171534    4757 flags.go:64] FLAG: --storage-driver-password="root"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171540    4757 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171545    4757 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171551    4757 flags.go:64] FLAG: --storage-driver-user="root"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171556    4757 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171561    4757 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171566    4757 flags.go:64] FLAG: --system-cgroups=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171570    4757 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171577    4757 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171581    4757 flags.go:64] FLAG: --tls-cert-file=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171586    4757 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171592    4757 flags.go:64] FLAG: --tls-min-version=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171597    4757 flags.go:64] FLAG: --tls-private-key-file=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171601    4757 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171613    4757 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171618    4757 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171622    4757 flags.go:64] FLAG: --v="2"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171629    4757 flags.go:64] FLAG: --version="false"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171636    4757 flags.go:64] FLAG: --vmodule=""
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171642    4757 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.171646    4757 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171768    4757 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171775    4757 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171779    4757 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171784    4757 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171788    4757 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171793    4757 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171798    4757 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171802    4757 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171806    4757 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171811    4757 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171816    4757 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171821    4757 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171826    4757 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171830    4757 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171834    4757 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171839    4757 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171844    4757 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171850    4757 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
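[Editor's note] Every `flags.go:64] FLAG: --name="value"` record above pairs one command-line flag with its effective value, so the whole dump can be folded back into a lookup table when diffing two boots. A rough sketch of doing that with an ad hoc regex (the helper is not part of kubelet; it only assumes the `FLAG: --name="value"` shape seen in this log):

package main

import (
	"fmt"
	"regexp"
)

// flagRe matches records like `flags.go:64] FLAG: --max-pods="110"`.
var flagRe = regexp.MustCompile(`FLAG: --([^=]+)="(.*?)"`)

// collectFlags folds FLAG records into a name -> value map.
func collectFlags(lines []string) map[string]string {
	flags := map[string]string{}
	for _, line := range lines {
		if m := flagRe.FindStringSubmatch(line); m != nil {
			flags[m[1]] = m[2]
		}
	}
	return flags
}

func main() {
	flags := collectFlags([]string{
		`I0129 15:10:37.171318 4757 flags.go:64] FLAG: --max-pods="110"`,
	})
	fmt.Println(flags["max-pods"]) // 110
}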
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171856    4757 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171860    4757 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171865    4757 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171869    4757 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171873    4757 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171878    4757 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171882    4757 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171890    4757 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171894    4757 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171899    4757 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171903    4757 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171907    4757 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171912    4757 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171920    4757 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171930    4757 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171935    4757 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171940    4757 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171944    4757 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171948    4757 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171951    4757 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171955    4757 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171958    4757 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171962    4757 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171966    4757 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171970    4757 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171973    4757 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171977    4757 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171981    4757 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171985    4757 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171989    4757 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171993    4757 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.171996    4757 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.172001    4757 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.172006    4757 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.172010    4757 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.172014    4757 feature_gate.go:330] unrecognized feature gate: Example
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.172017    4757 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.172021    4757 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.172024    4757 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.172031    4757 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.172035    4757 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.172038    4757 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.172042    4757 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.172045    4757 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.172050    4757 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.172056    4757 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.172060    4757 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.172064    4757 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.172068    4757 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.172071    4757 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.172075    4757 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.172079    4757 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.172082    4757 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.172901    4757 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.185803    4757 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.185849    4757 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.185962    4757 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.185973    4757 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.185979    4757 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.185986    4757 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.185993    4757 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.185998    4757 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186003    4757 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186008    4757 feature_gate.go:330] unrecognized feature gate: Example
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186012    4757 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186017    4757 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186022    4757 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186026    4757 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186031    4757 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186036    4757 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186041    4757 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186046    4757 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186049    4757 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186053    4757 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186057    4757 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186060    4757 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186064    4757 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186069    4757 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186074 4757 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186079 4757 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186083 4757 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186087 4757 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186090 4757 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186094 4757 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186098 4757 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186101 4757 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186105 4757 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186110 4757 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186114 4757 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186119 4757 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186123 4757 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186127 4757 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186131 4757 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186135 4757 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186139 4757 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186144 4757 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186147 4757 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186151 4757 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186156 4757 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186162 4757 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186166 4757 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186170 4757 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186174 4757 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186178 4757 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186182 4757 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186186 4757 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186219 4757 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186225 4757 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186230 4757 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186234 4757 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186239 4757 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186244 4757 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186249 4757 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186254 4757 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186258 4757 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186278 4757 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186283 4757 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186287 4757 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186290 4757 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186294 4757 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186298 4757 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186302 4757 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186307 4757 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186310 4757 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186314 
4757 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186319 4757 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186324 4757 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.186331 4757 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186472 4757 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186485 4757 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186490 4757 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186496 4757 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186501 4757 feature_gate.go:330] unrecognized feature gate: Example Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186506 4757 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186511 4757 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186516 4757 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186520 4757 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186525 4757 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186531 4757 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186536 4757 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186541 4757 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186547 4757 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186552 4757 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186557 4757 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186561 4757 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186567 4757 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186571 4757 feature_gate.go:330] unrecognized feature 
gate: InsightsConfigAPI Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186576 4757 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186580 4757 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186585 4757 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186589 4757 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186594 4757 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186599 4757 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186604 4757 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186608 4757 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186615 4757 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186620 4757 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186624 4757 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186629 4757 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186634 4757 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186638 4757 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186643 4757 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186647 4757 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186652 4757 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186656 4757 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186661 4757 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186665 4757 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186670 4757 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186674 4757 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186679 4757 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186684 4757 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186689 4757 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 
15:10:37.186695 4757 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186702 4757 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186708 4757 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186713 4757 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186719 4757 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186724 4757 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186728 4757 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186734 4757 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186739 4757 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186743 4757 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186748 4757 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186753 4757 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186757 4757 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186764 4757 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186771 4757 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186775 4757 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186781 4757 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186786 4757 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186791 4757 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186797 4757 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186802 4757 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186807 4757 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186812 4757 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186818 4757 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186823 4757 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186828 4757 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.186832 4757 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.186839 4757 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.187915 4757 server.go:940] "Client rotation is on, will bootstrap in background" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.193027 4757 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.193135 4757 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
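
Client-certificate bootstrap above short-circuits because the kubeconfig on disk is still valid, and the kubelet loads its combined cert/key pair from /var/lib/kubelet/pki/kubelet-client-current.pem; the rotation schedule (a jittered deadline placed well inside the certificate's validity window, which is why 2025-11-20 lands months before the 2026-02-24 expiry) and the first, failing CSR attempt follow below. A standard-library sketch for inspecting that bundled file on the node — the path is the one in the log, and the program needs read access to it:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	// Path reported by certificate_store.go in the log above.
    	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
    	if err != nil {
    		log.Fatal(err)
    	}
    	// The file bundles the client certificate and its private key;
    	// walk every PEM block and report validity for each certificate.
    	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
    		if block.Type != "CERTIFICATE" {
    			continue
    		}
    		cert, err := x509.ParseCertificate(block.Bytes)
    		if err != nil {
    			log.Fatal(err)
    		}
    		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
    			cert.Subject, cert.NotBefore, cert.NotAfter)
    	}
    }
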
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.195734 4757 server.go:997] "Starting client certificate rotation" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.195767 4757 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.195948 4757 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-20 15:50:45.529970371 +0000 UTC Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.196039 4757 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.218148 4757 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 15:10:37 crc kubenswrapper[4757]: E0129 15:10:37.219708 4757 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.220688 4757 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.237153 4757 log.go:25] "Validated CRI v1 runtime API" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.270472 4757 log.go:25] "Validated CRI v1 image API" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.273594 4757 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.277370 4757 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-29-15-02-29-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.277407 4757 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.293046 4757 manager.go:217] Machine: {Timestamp:2026-01-29 15:10:37.291061708 +0000 UTC m=+0.580311965 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2800000 MemoryCapacity:25199476736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:5f377355-ee96-4ac8-8c1b-9d23158e8b01 BootID:2b0cb187-65d3-4368-92b4-54568692447c Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 
Capacity:12599738368 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:95:19:8a Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:95:19:8a Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:3c:c5:5f Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:bb:66:82 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:c8:cc:f6 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:c6:e6:e4 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:8a:dc:8c:6d:1b:82 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:3e:b4:ff:79:1e:e3 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199476736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.293339 4757 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
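
A quick sanity check of the machine inventory above: MemoryCapacity 25199476736 bytes is 25199476736 / 1024³ ≈ 23.5 GiB, the vda disk at 429496729600 bytes is exactly 400 GiB (400 × 2³⁰), and the /var filesystem on /dev/vda4 (85292941312 bytes) comes to ≈ 79.4 GiB. The topology block — eight sockets, each with one single-threaded core and its own L3 — is the flat layout a virtualized guest typically reports, consistent with the virtio disks (vda) and the crc hostname pointing at a CRC-style VM.
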
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.293522 4757 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.296860 4757 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.297120 4757 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.297208 4757 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.297478 4757 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.297530 4757 container_manager_linux.go:303] "Creating device plugin manager" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.298420 4757 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.298508 4757 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.298830 4757 state_mem.go:36] "Initialized new in-memory state store" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.298958 4757 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.302739 4757 kubelet.go:418] "Attempting to sync node with API server" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.302829 4757 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" 
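
The Container Manager NodeConfig above pins down node allocatable for this node: KubeReserved is null, SystemReserved carves out 200m CPU / 350Mi memory / 350Mi ephemeral storage, and the hard eviction threshold for memory.available is 100Mi. Allocatable memory then follows the documented formula capacity − kube-reserved − system-reserved − eviction-hard; a back-of-the-envelope check with the numbers taken from this log (the arithmetic below is the standard formula applied by hand, not code from the kubelet):

    package main

    import "fmt"

    func main() {
    	const (
    		capacity       = 25199476736 // MemoryCapacity from the machine dump, bytes
    		systemReserved = 350 << 20   // "memory":"350Mi" from SystemReserved
    		kubeReserved   = 0           // KubeReserved is null in this config
    		evictionHard   = 100 << 20   // memory.available hard threshold, 100Mi
    	)
    	allocatable := capacity - kubeReserved - systemReserved - evictionHard
    	fmt.Printf("allocatable memory ≈ %d bytes (%.2f GiB)\n",
    		allocatable, float64(allocatable)/(1<<30)) // ≈ 23.03 GiB
    }

On a live node the same reservation settings can be read back from the kubelet's configuration endpoint, e.g. kubectl get --raw "/api/v1/nodes/crc/proxy/configz", which reports systemReserved and evictionHard as part of the KubeletConfiguration.
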
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.302903 4757 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.302972 4757 kubelet.go:324] "Adding apiserver pod source" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.303028 4757 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.308813 4757 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.310831 4757 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.312302 4757 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Jan 29 15:10:37 crc kubenswrapper[4757]: E0129 15:10:37.312403 4757 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.313819 4757 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.314018 4757 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Jan 29 15:10:37 crc kubenswrapper[4757]: E0129 15:10:37.314110 4757 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.315525 4757 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.315584 4757 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.315603 4757 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.315616 4757 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.315635 4757 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.315646 4757 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.315655 4757 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.315670 4757 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/downward-api" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.315732 4757 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.315744 4757 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.315757 4757 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.315767 4757 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.315791 4757 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.316321 4757 server.go:1280] "Started kubelet" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.316470 4757 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.316808 4757 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.317610 4757 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.318246 4757 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 15:10:37 crc systemd[1]: Started Kubernetes Kubelet. Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.319981 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.320065 4757 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.320609 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 08:17:25.953527183 +0000 UTC Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.322134 4757 factory.go:55] Registering systemd factory Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.322173 4757 factory.go:221] Registration of the systemd container factory successfully Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.322440 4757 server.go:460] "Adding debug handlers to kubelet server" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.323578 4757 factory.go:153] Registering CRI-O factory Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.323599 4757 factory.go:221] Registration of the crio container factory successfully Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.323662 4757 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.323683 4757 factory.go:103] Registering Raw factory Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.323697 4757 manager.go:1196] Started watching for new ooms in manager Jan 29 15:10:37 crc kubenswrapper[4757]: E0129 15:10:37.324354 4757 kubelet_node_status.go:503] "Error getting the current node from lister" err="node 
\"crc\" not found" Jan 29 15:10:37 crc kubenswrapper[4757]: E0129 15:10:37.324465 4757 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="200ms" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.327861 4757 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.328234 4757 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.328157 4757 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.328719 4757 manager.go:319] Starting recovery of all containers Jan 29 15:10:37 crc kubenswrapper[4757]: E0129 15:10:37.324249 4757 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.219:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f3c431072319c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 15:10:37.316288924 +0000 UTC m=+0.605539181,LastTimestamp:2026-01-29 15:10:37.316288924 +0000 UTC m=+0.605539181,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.330961 4757 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Jan 29 15:10:37 crc kubenswrapper[4757]: E0129 15:10:37.331088 4757 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344591 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344640 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344653 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344662 4757 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344671 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344681 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344691 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344708 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344721 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344733 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344744 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344755 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344766 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344779 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344789 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344799 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344810 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344820 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344861 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344871 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344883 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344894 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344905 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344915 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344924 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344963 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344975 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344985 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.344994 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345005 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345030 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345040 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345050 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345061 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345070 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345081 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345092 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345102 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345111 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345121 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345132 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345158 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345169 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345179 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345190 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345202 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345213 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345224 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345234 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345245 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345256 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345293 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345309 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345321 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345337 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345348 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345358 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345368 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345378 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345388 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345398 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345409 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345421 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345430 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345440 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345450 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345460 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345470 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345480 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345489 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345499 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345510 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345520 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345533 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345545 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345563 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345575 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345589 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345600 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345610 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345621 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345638 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345658 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345673 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345687 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345698 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345710 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345720 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345730 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345741 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345750 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345760 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345772 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345785 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345803 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345823 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345841 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345857 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345881 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345902 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345916 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345933 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345947 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345962 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345981 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.345996 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346013 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346027 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346043 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346058 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346101 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346119 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346134 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346153 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" 
volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346167 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346178 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346190 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346202 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346213 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346223 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346233 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346243 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346253 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346280 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346295 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346328 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346346 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346360 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346373 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346386 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346396 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346407 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346419 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346428 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346438 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346449 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.346461 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349491 4757 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349534 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349552 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349568 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349585 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349603 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349617 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349631 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349643 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349657 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349672 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349685 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349698 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349711 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349728 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349742 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349758 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349771 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349785 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349799 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349818 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349831 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349846 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349858 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349869 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349882 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349894 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349906 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349927 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349939 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349953 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349965 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349977 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349988 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.349998 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350011 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350023 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350034 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350046 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350057 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350098 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350109 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350122 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350133 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350143 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350154 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350169 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350180 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350191 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350203 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350215 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350227 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350250 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350284 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350300 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350313 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350329 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350382 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350397 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350411 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350424 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350438 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350451 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350486 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350497 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350507 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350545 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350567 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350584 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350597 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350612 4757 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350623 4757 reconstruct.go:97] "Volume reconstruction finished" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.350630 4757 reconciler.go:26] "Reconciler: start to sync state" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.357888 4757 manager.go:324] Recovery completed Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.367106 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.369141 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.369200 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.369216 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.371372 4757 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.371403 4757 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.371432 4757 state_mem.go:36] "Initialized new in-memory state store" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.390935 4757 policy_none.go:49] "None policy: Start" Jan 29 15:10:37 crc kubenswrapper[4757]: 
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.391785 4757 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.391815 4757 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.392605 4757 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.394952 4757 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.394995 4757 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.395023 4757 kubelet.go:2335] "Starting kubelet main sync loop"
Jan 29 15:10:37 crc kubenswrapper[4757]: E0129 15:10:37.395070 4757 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.395967 4757 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused
Jan 29 15:10:37 crc kubenswrapper[4757]: E0129 15:10:37.396039 4757 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError"
Jan 29 15:10:37 crc kubenswrapper[4757]: E0129 15:10:37.424658 4757 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.446245 4757 manager.go:334] "Starting Device Plugin manager"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.446336 4757 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.446351 4757 server.go:79] "Starting device plugin registration server"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.447216 4757 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.447240 4757 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.447445 4757 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.447605 4757 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.447615 4757 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 15:10:37 crc kubenswrapper[4757]: E0129 15:10:37.457189 4757 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
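[editor's note] Every "connection refused" above points at the same endpoint, https://api-int.crc.testing:6443 (38.102.83.219:6443): this kubelet starts before the static-pod kube-apiserver it is about to launch, so the reflector list/watch, the node lookup, and the eviction stats all fail until that port opens. The kubelet keeps retrying on its own; a minimal standalone probe in Go shows the same wait-for-apiserver pattern. The backoff values here are assumptions for illustration, not the kubelet's actual retry logic (its lease retry interval of 400ms is visible further below):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        const addr = "api-int.crc.testing:6443" // the endpoint failing above
        delay := 400 * time.Millisecond
        for {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("apiserver is accepting connections")
                return
            }
            fmt.Printf("dial %s: %v; retrying in %v\n", addr, err, delay)
            time.Sleep(delay)
            if delay < 10*time.Second {
                delay *= 2 // capped exponential backoff
            }
        }
    }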
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.495814 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.496969 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.496998 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.497027 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.497158 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.497385 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.497425 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.498082 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.498128 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.498144 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.498390 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.498556 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.498608 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.499507 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.499541 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.499554 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.500766 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.500809 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.500885 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.501390 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.502262 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.502368 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.503837 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.503875 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.503892 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.503842 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.503924 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.503934 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.503949 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.503967 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.504020 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.504034 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.504140 4757 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.504182 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.505031 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.505061 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.505072 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.505142 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.505159 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.505166 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.505237 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.505284 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.505864 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.505885 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.505893 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:37 crc kubenswrapper[4757]: E0129 15:10:37.525421 4757 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="400ms" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.548005 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.549109 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.549144 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.549156 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.549183 4757 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 15:10:37 crc kubenswrapper[4757]: E0129 15:10:37.549729 4757 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.219:6443: connect: connection refused" node="crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.552814 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.552856 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.552883 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.552905 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.552931 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.553015 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.553050 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.553083 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.553160 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.553219 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.553249 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.553289 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.553309 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.553334 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.553393 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654001 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654052 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654070 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 
15:10:37.654086 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654105 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654123 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654189 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654206 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654238 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654285 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654259 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654298 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654345 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654423 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654326 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654341 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654511 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654544 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654560 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654582 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654599 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654619 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654641 4757 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654657 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654670 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654677 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654703 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654710 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654752 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.654806 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.750132 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.752585 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.752649 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.752662 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.752709 4757 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 15:10:37 crc kubenswrapper[4757]: E0129 15:10:37.753441 4757 
Jan 29 15:10:37 crc kubenswrapper[4757]: E0129 15:10:37.753441 4757 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.219:6443: connect: connection refused" node="crc"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.835040 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.856057 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.862997 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.878960 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-bd10118d1068f4bcbd5b228422e2ce801a6f6ebf2680ae47bb9912c775be08d0 WatchSource:0}: Error finding container bd10118d1068f4bcbd5b228422e2ce801a6f6ebf2680ae47bb9912c775be08d0: Status 404 returned error can't find the container with id bd10118d1068f4bcbd5b228422e2ce801a6f6ebf2680ae47bb9912c775be08d0
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.880507 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 15:10:37 crc kubenswrapper[4757]: I0129 15:10:37.884416 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 15:10:37 crc kubenswrapper[4757]: E0129 15:10:37.901869 4757 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.219:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f3c431072319c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 15:10:37.316288924 +0000 UTC m=+0.605539181,LastTimestamp:2026-01-29 15:10:37.316288924 +0000 UTC m=+0.605539181,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.902002 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-ab97cb01ee26f56875cb979cb25260dc4baef770ebba0af0db0fa3cc02abf443 WatchSource:0}: Error finding container ab97cb01ee26f56875cb979cb25260dc4baef770ebba0af0db0fa3cc02abf443: Status 404 returned error can't find the container with id ab97cb01ee26f56875cb979cb25260dc4baef770ebba0af0db0fa3cc02abf443
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.904871 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-d771e811df825c6e722d97b57bca99d2556ab777722f2834f6dd2bbe43cb3c7f WatchSource:0}: Error finding container d771e811df825c6e722d97b57bca99d2556ab777722f2834f6dd2bbe43cb3c7f: Status 404 returned error can't find the container with id d771e811df825c6e722d97b57bca99d2556ab777722f2834f6dd2bbe43cb3c7f
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.906964 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-a6b4b272cc69c7c91870bc54405f67d04d301f83f0e6047b7f9cd2537ebb76be WatchSource:0}: Error finding container a6b4b272cc69c7c91870bc54405f67d04d301f83f0e6047b7f9cd2537ebb76be: Status 404 returned error can't find the container with id a6b4b272cc69c7c91870bc54405f67d04d301f83f0e6047b7f9cd2537ebb76be
Jan 29 15:10:37 crc kubenswrapper[4757]: W0129 15:10:37.915887 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-2563a049dd9fa153627f54cab4ba669937c9f20da7743c7cae2f036019b5d179 WatchSource:0}: Error finding container 2563a049dd9fa153627f54cab4ba669937c9f20da7743c7cae2f036019b5d179: Status 404 returned error can't find the container with id 2563a049dd9fa153627f54cab4ba669937c9f20da7743c7cae2f036019b5d179
Jan 29 15:10:37 crc kubenswrapper[4757]: E0129 15:10:37.926515 4757 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="800ms"
Jan 29 15:10:38 crc kubenswrapper[4757]: I0129 15:10:38.154593 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 15:10:38 crc kubenswrapper[4757]: I0129 15:10:38.158041 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:10:38 crc kubenswrapper[4757]: I0129 15:10:38.158095 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:10:38 crc kubenswrapper[4757]: I0129 15:10:38.158119 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:10:38 crc kubenswrapper[4757]: I0129 15:10:38.158152 4757 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 29 15:10:38 crc kubenswrapper[4757]: E0129 15:10:38.159229 4757 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.219:6443: connect: connection refused" node="crc"
Jan 29 15:10:38 crc kubenswrapper[4757]: I0129 15:10:38.319083 4757 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused
Jan 29 15:10:38 crc kubenswrapper[4757]: I0129 15:10:38.321173 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 22:52:52.79788511 +0000 UTC
Jan 29 15:10:38 crc kubenswrapper[4757]: I0129 15:10:38.399749 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"bd10118d1068f4bcbd5b228422e2ce801a6f6ebf2680ae47bb9912c775be08d0"}
Jan 29 15:10:38 crc kubenswrapper[4757]: I0129 15:10:38.401868 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2563a049dd9fa153627f54cab4ba669937c9f20da7743c7cae2f036019b5d179"}
Jan 29 15:10:38 crc kubenswrapper[4757]: I0129 15:10:38.402920 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a6b4b272cc69c7c91870bc54405f67d04d301f83f0e6047b7f9cd2537ebb76be"}
Jan 29 15:10:38 crc kubenswrapper[4757]: I0129 15:10:38.404380 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d771e811df825c6e722d97b57bca99d2556ab777722f2834f6dd2bbe43cb3c7f"}
Jan 29 15:10:38 crc kubenswrapper[4757]: I0129 15:10:38.405365 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ab97cb01ee26f56875cb979cb25260dc4baef770ebba0af0db0fa3cc02abf443"}
Jan 29 15:10:38 crc kubenswrapper[4757]: W0129 15:10:38.468889 4757 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused
Jan 29 15:10:38 crc kubenswrapper[4757]: E0129 15:10:38.469020 4757 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError"
Jan 29 15:10:38 crc kubenswrapper[4757]: W0129 15:10:38.557700 4757 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused
Jan 29 15:10:38 crc kubenswrapper[4757]: E0129 15:10:38.557783 4757 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError"
Jan 29 15:10:38 crc kubenswrapper[4757]: W0129 15:10:38.565732 4757 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused
Jan 29 15:10:38 crc kubenswrapper[4757]: E0129 15:10:38.565794 4757 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError"
Jan 29 15:10:38 crc kubenswrapper[4757]: W0129 15:10:38.662470 4757 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused
Jan 29 15:10:38 crc kubenswrapper[4757]: E0129 15:10:38.662572 4757 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError"
Jan 29 15:10:38 crc kubenswrapper[4757]: E0129 15:10:38.727682 4757 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="1.6s"
Jan 29 15:10:38 crc kubenswrapper[4757]: I0129 15:10:38.960311 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 15:10:38 crc kubenswrapper[4757]: I0129 15:10:38.962718 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:10:38 crc kubenswrapper[4757]: I0129 15:10:38.962760 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:10:38 crc kubenswrapper[4757]: I0129 15:10:38.962770 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:10:38 crc kubenswrapper[4757]: I0129 15:10:38.962800 4757 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 29 15:10:38 crc kubenswrapper[4757]: E0129 15:10:38.963376 4757 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.219:6443: connect: connection refused" node="crc"
Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.249261 4757 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 29 15:10:39 crc kubenswrapper[4757]: E0129 15:10:39.250174 4757 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError"
Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.318812 4757 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused
Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.321936 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 20:26:38.850891147 +0000 UTC
Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.410092 4757 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9" exitCode=0
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9"} Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.410232 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.411743 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.411779 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.411794 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.413202 4757 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="1d4259382242b09ba7ae725b2c14c6543a776c0108d444003ed640ee1435d61d" exitCode=0 Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.413293 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.413303 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"1d4259382242b09ba7ae725b2c14c6543a776c0108d444003ed640ee1435d61d"} Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.413766 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.414238 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.414324 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.414349 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.415794 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.415866 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.415878 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.416750 4757 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="e1d3c2dec6b32ab9db2187d008e2416c05f2b7f47a588b0d7e8e4d9645eb0ffc" exitCode=0 Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.416800 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.416849 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"e1d3c2dec6b32ab9db2187d008e2416c05f2b7f47a588b0d7e8e4d9645eb0ffc"} Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.417688 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.417733 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.417746 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.419166 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121"} Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.421198 4757 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="340a02db01ec89173784491769b9fbc4ae9895b042384d00b58bf7f0852d882d" exitCode=0 Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.421244 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"340a02db01ec89173784491769b9fbc4ae9895b042384d00b58bf7f0852d882d"} Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.421415 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.422255 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.422300 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:39 crc kubenswrapper[4757]: I0129 15:10:39.422312 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:40 crc kubenswrapper[4757]: I0129 15:10:40.318840 4757 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Jan 29 15:10:40 crc kubenswrapper[4757]: I0129 15:10:40.322076 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 05:45:54.750306613 +0000 UTC Jan 29 15:10:40 crc kubenswrapper[4757]: E0129 15:10:40.328715 4757 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="3.2s" Jan 29 15:10:40 crc kubenswrapper[4757]: I0129 15:10:40.427063 4757 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1d95f9f9ad1d75c62dd4642d07cbb58f7e7463a89b54a69df81070f31d8d9ddd" exitCode=0 Jan 29 15:10:40 crc kubenswrapper[4757]: I0129 15:10:40.427146 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1d95f9f9ad1d75c62dd4642d07cbb58f7e7463a89b54a69df81070f31d8d9ddd"} Jan 29 15:10:40 crc kubenswrapper[4757]: I0129 15:10:40.427222 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:40 crc kubenswrapper[4757]: I0129 15:10:40.428778 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:40 crc kubenswrapper[4757]: I0129 15:10:40.428812 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:40 crc kubenswrapper[4757]: I0129 15:10:40.428823 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:40 crc kubenswrapper[4757]: I0129 15:10:40.431432 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce"} Jan 29 15:10:40 crc kubenswrapper[4757]: I0129 15:10:40.431477 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978"} Jan 29 15:10:40 crc kubenswrapper[4757]: I0129 15:10:40.438198 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3b4d592b73375bbaf1446855a2bc04008aee7c0bbdacb1267827245020de1727"} Jan 29 15:10:40 crc kubenswrapper[4757]: I0129 15:10:40.438231 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:40 crc kubenswrapper[4757]: I0129 15:10:40.439778 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:40 crc kubenswrapper[4757]: I0129 15:10:40.439841 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:40 crc kubenswrapper[4757]: I0129 15:10:40.439861 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:40 crc kubenswrapper[4757]: I0129 15:10:40.443609 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6396476b582de4682f63061b8c38324ea5e530d9227f520bad41f5fab1e3fa50"} Jan 29 15:10:40 crc kubenswrapper[4757]: I0129 15:10:40.443684 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2d55ea7daad0c4fdedbde11ce3aefb6b535e868f6af94052fae4e04ab8cb9192"} Jan 29 15:10:40 crc kubenswrapper[4757]: I0129 15:10:40.447310 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0"} Jan 29 15:10:40 crc kubenswrapper[4757]: I0129 15:10:40.447378 4757 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29"} Jan 29 15:10:40 crc kubenswrapper[4757]: I0129 15:10:40.564114 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:40 crc kubenswrapper[4757]: I0129 15:10:40.565783 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:40 crc kubenswrapper[4757]: I0129 15:10:40.565829 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:40 crc kubenswrapper[4757]: I0129 15:10:40.565841 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:40 crc kubenswrapper[4757]: I0129 15:10:40.565868 4757 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 15:10:40 crc kubenswrapper[4757]: E0129 15:10:40.566406 4757 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.219:6443: connect: connection refused" node="crc" Jan 29 15:10:41 crc kubenswrapper[4757]: W0129 15:10:41.092528 4757 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Jan 29 15:10:41 crc kubenswrapper[4757]: E0129 15:10:41.092629 4757 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:10:41 crc kubenswrapper[4757]: W0129 15:10:41.166743 4757 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Jan 29 15:10:41 crc kubenswrapper[4757]: E0129 15:10:41.166898 4757 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:10:41 crc kubenswrapper[4757]: W0129 15:10:41.235527 4757 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused Jan 29 15:10:41 crc kubenswrapper[4757]: E0129 15:10:41.235595 4757 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.318857 4757 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused
Jan 29 15:10:41 crc kubenswrapper[4757]: W0129 15:10:41.318876 4757 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused
Jan 29 15:10:41 crc kubenswrapper[4757]: E0129 15:10:41.319027 4757 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.219:6443: connect: connection refused" logger="UnhandledError"
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.322467 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 18:26:55.309800647 +0000 UTC
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.452722 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ef9b463efd0451a165de686ba00645d799cddbf305a6f7227954dac78bbfe53e"}
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.453214 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.454104 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.454144 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.454155 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.455192 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd"}
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.455326 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.456153 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.456187 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.456199 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.457390 4757 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="869a526f2c30316cbe9821deee35ae5b73825dfe1b48947f878ac226418f668d" exitCode=0
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.457445 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"869a526f2c30316cbe9821deee35ae5b73825dfe1b48947f878ac226418f668d"}
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.457591 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.458427 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.458516 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.458597 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.460760 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.460758 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1eda1e717c10a3060d3fa87127fb2907bf5d5686f42038ba830e5106271c7977"}
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.460812 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674"}
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.460827 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf"}
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.460989 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.461629 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.461666 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.461679 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.462387 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.462434 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:10:41 crc kubenswrapper[4757]: I0129 15:10:41.462451 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:10:42 crc kubenswrapper[4757]: I0129 15:10:42.276396 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 15:10:42 crc kubenswrapper[4757]: I0129 15:10:42.318563 4757 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.219:6443: connect: connection refused
Jan 29 15:10:42 crc kubenswrapper[4757]: I0129 15:10:42.322874 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 05:06:29.319774428 +0000 UTC
Jan 29 15:10:42 crc kubenswrapper[4757]: I0129 15:10:42.470790 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f0dd997d784a2a947d3c7eb6dace8c7ecbddff303ef4aaab9020df7fe4b10867"}
Jan 29 15:10:42 crc kubenswrapper[4757]: I0129 15:10:42.470832 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"acf33ab0e0f71f3a19cfd55735c6ee43f44e1cd65f37fad3698ac353c7555e0b"}
Jan 29 15:10:42 crc kubenswrapper[4757]: I0129 15:10:42.470858 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c303ffeb46ff6cff68c0cf79ea1afa159aeb5fa128e6a3f2f6029c6db7a036d8"}
Jan 29 15:10:42 crc kubenswrapper[4757]: I0129 15:10:42.470878 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 15:10:42 crc kubenswrapper[4757]: I0129 15:10:42.470901 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 15:10:42 crc kubenswrapper[4757]: I0129 15:10:42.470943 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 15:10:42 crc kubenswrapper[4757]: I0129 15:10:42.470960 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 15:10:42 crc kubenswrapper[4757]: I0129 15:10:42.471643 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 15:10:42 crc kubenswrapper[4757]: I0129 15:10:42.471978 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:10:42 crc kubenswrapper[4757]: I0129 15:10:42.471991 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:10:42 crc kubenswrapper[4757]: I0129 15:10:42.472019 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:10:42 crc kubenswrapper[4757]: I0129 15:10:42.472029 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:10:42 crc kubenswrapper[4757]: I0129 15:10:42.472005 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:10:42 crc kubenswrapper[4757]: I0129 15:10:42.472089 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:10:42 crc kubenswrapper[4757]: I0129 15:10:42.472926 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:10:42 crc kubenswrapper[4757]: I0129 15:10:42.472951 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:10:42 crc kubenswrapper[4757]: I0129 15:10:42.472961 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:10:43 crc kubenswrapper[4757]: I0129 15:10:43.323296 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 06:56:21.724469594 +0000 UTC
Jan 29 15:10:43 crc kubenswrapper[4757]: I0129 15:10:43.478254 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 15:10:43 crc kubenswrapper[4757]: I0129 15:10:43.478303 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 15:10:43 crc kubenswrapper[4757]: I0129 15:10:43.478438 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 15:10:43 crc kubenswrapper[4757]: I0129 15:10:43.478756 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6f0fc0a2dba13d8ec6c02074931378ca8b9478b1bfa726ff6e5ca0f4bd389fe0"}
Jan 29 15:10:43 crc kubenswrapper[4757]: I0129 15:10:43.478806 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1d2070b4f916a6f27522b5ad5343678fd9736599dc4b88a1029569a4a1a368d4"}
Jan 29 15:10:43 crc kubenswrapper[4757]: I0129 15:10:43.479092 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:10:43 crc kubenswrapper[4757]: I0129 15:10:43.479114 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:10:43 crc kubenswrapper[4757]: I0129 15:10:43.479123 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:10:43 crc kubenswrapper[4757]: I0129 15:10:43.479671 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:10:43 crc kubenswrapper[4757]: I0129 15:10:43.479715 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:10:43 crc kubenswrapper[4757]: I0129 15:10:43.479724 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 15:10:43 crc kubenswrapper[4757]: I0129 15:10:43.479734 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:10:43 crc kubenswrapper[4757]: I0129 15:10:43.479880 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:10:43 crc kubenswrapper[4757]: I0129 15:10:43.479905 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:10:43 crc kubenswrapper[4757]: I0129 15:10:43.479918 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:10:43 crc kubenswrapper[4757]: I0129 15:10:43.481113 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:10:43 crc kubenswrapper[4757]: I0129 15:10:43.481132 4757 kubelet_node_status.go:724] "Recording event message for
node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:43 crc kubenswrapper[4757]: I0129 15:10:43.481140 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:43 crc kubenswrapper[4757]: I0129 15:10:43.623003 4757 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 15:10:43 crc kubenswrapper[4757]: I0129 15:10:43.763540 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:10:43 crc kubenswrapper[4757]: I0129 15:10:43.766694 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:43 crc kubenswrapper[4757]: I0129 15:10:43.768363 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:43 crc kubenswrapper[4757]: I0129 15:10:43.768430 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:43 crc kubenswrapper[4757]: I0129 15:10:43.768452 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:43 crc kubenswrapper[4757]: I0129 15:10:43.768490 4757 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 15:10:44 crc kubenswrapper[4757]: I0129 15:10:44.323979 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 19:02:52.680429933 +0000 UTC Jan 29 15:10:44 crc kubenswrapper[4757]: I0129 15:10:44.480795 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:44 crc kubenswrapper[4757]: I0129 15:10:44.480796 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:44 crc kubenswrapper[4757]: I0129 15:10:44.481690 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:44 crc kubenswrapper[4757]: I0129 15:10:44.481727 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:44 crc kubenswrapper[4757]: I0129 15:10:44.481742 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:44 crc kubenswrapper[4757]: I0129 15:10:44.483112 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:44 crc kubenswrapper[4757]: I0129 15:10:44.483142 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:44 crc kubenswrapper[4757]: I0129 15:10:44.483154 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:45 crc kubenswrapper[4757]: I0129 15:10:45.325072 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 13:38:45.225002842 +0000 UTC Jan 29 15:10:45 crc kubenswrapper[4757]: I0129 15:10:45.762724 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 29 15:10:45 crc kubenswrapper[4757]: I0129 15:10:45.762923 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Jan 29 15:10:45 crc kubenswrapper[4757]: I0129 15:10:45.764249 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:45 crc kubenswrapper[4757]: I0129 15:10:45.764328 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:45 crc kubenswrapper[4757]: I0129 15:10:45.764348 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:45 crc kubenswrapper[4757]: I0129 15:10:45.986064 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:10:45 crc kubenswrapper[4757]: I0129 15:10:45.986302 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:45 crc kubenswrapper[4757]: I0129 15:10:45.987318 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:45 crc kubenswrapper[4757]: I0129 15:10:45.987363 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:45 crc kubenswrapper[4757]: I0129 15:10:45.987378 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:46 crc kubenswrapper[4757]: I0129 15:10:46.325490 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 01:46:05.128694651 +0000 UTC Jan 29 15:10:46 crc kubenswrapper[4757]: I0129 15:10:46.829468 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 29 15:10:46 crc kubenswrapper[4757]: I0129 15:10:46.829640 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:46 crc kubenswrapper[4757]: I0129 15:10:46.830787 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:46 crc kubenswrapper[4757]: I0129 15:10:46.830837 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:46 crc kubenswrapper[4757]: I0129 15:10:46.830851 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:47 crc kubenswrapper[4757]: I0129 15:10:47.326635 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 11:58:10.608727668 +0000 UTC Jan 29 15:10:47 crc kubenswrapper[4757]: E0129 15:10:47.457388 4757 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 15:10:47 crc kubenswrapper[4757]: I0129 15:10:47.483815 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:10:47 crc kubenswrapper[4757]: I0129 15:10:47.483983 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:47 crc kubenswrapper[4757]: I0129 15:10:47.484991 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:47 crc kubenswrapper[4757]: I0129 15:10:47.485067 4757 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:47 crc kubenswrapper[4757]: I0129 15:10:47.485084 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:47 crc kubenswrapper[4757]: I0129 15:10:47.488212 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:10:47 crc kubenswrapper[4757]: I0129 15:10:47.489950 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:47 crc kubenswrapper[4757]: I0129 15:10:47.490044 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:10:47 crc kubenswrapper[4757]: I0129 15:10:47.490703 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:47 crc kubenswrapper[4757]: I0129 15:10:47.490743 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:47 crc kubenswrapper[4757]: I0129 15:10:47.490759 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:48 crc kubenswrapper[4757]: I0129 15:10:48.327612 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 11:00:11.619372137 +0000 UTC Jan 29 15:10:48 crc kubenswrapper[4757]: I0129 15:10:48.492367 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:48 crc kubenswrapper[4757]: I0129 15:10:48.493320 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:48 crc kubenswrapper[4757]: I0129 15:10:48.493360 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:48 crc kubenswrapper[4757]: I0129 15:10:48.493373 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:48 crc kubenswrapper[4757]: I0129 15:10:48.497206 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:10:49 crc kubenswrapper[4757]: I0129 15:10:49.328335 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 11:41:05.821389728 +0000 UTC Jan 29 15:10:49 crc kubenswrapper[4757]: I0129 15:10:49.494944 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:49 crc kubenswrapper[4757]: I0129 15:10:49.495891 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:49 crc kubenswrapper[4757]: I0129 15:10:49.495936 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:49 crc kubenswrapper[4757]: I0129 15:10:49.495946 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:50 crc kubenswrapper[4757]: I0129 15:10:50.329104 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: 
Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 07:23:41.165754428 +0000 UTC Jan 29 15:10:50 crc kubenswrapper[4757]: I0129 15:10:50.390555 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:10:50 crc kubenswrapper[4757]: I0129 15:10:50.497314 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:50 crc kubenswrapper[4757]: I0129 15:10:50.498051 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:50 crc kubenswrapper[4757]: I0129 15:10:50.498087 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:50 crc kubenswrapper[4757]: I0129 15:10:50.498096 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:51 crc kubenswrapper[4757]: I0129 15:10:51.330163 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 19:12:45.875050257 +0000 UTC Jan 29 15:10:52 crc kubenswrapper[4757]: I0129 15:10:52.330355 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 23:37:01.881852124 +0000 UTC Jan 29 15:10:52 crc kubenswrapper[4757]: I0129 15:10:52.674228 4757 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54414->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 29 15:10:52 crc kubenswrapper[4757]: I0129 15:10:52.674349 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54414->192.168.126.11:17697: read: connection reset by peer" Jan 29 15:10:52 crc kubenswrapper[4757]: I0129 15:10:52.739376 4757 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 29 15:10:52 crc kubenswrapper[4757]: I0129 15:10:52.739807 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 29 15:10:52 crc kubenswrapper[4757]: I0129 15:10:52.746728 4757 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 29 
15:10:52 crc kubenswrapper[4757]: I0129 15:10:52.746799 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 29 15:10:53 crc kubenswrapper[4757]: I0129 15:10:53.124614 4757 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 29 15:10:53 crc kubenswrapper[4757]: I0129 15:10:53.124675 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 29 15:10:53 crc kubenswrapper[4757]: I0129 15:10:53.384631 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 14:50:08.627104433 +0000 UTC Jan 29 15:10:53 crc kubenswrapper[4757]: I0129 15:10:53.390840 4757 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 15:10:53 crc kubenswrapper[4757]: I0129 15:10:53.391078 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 15:10:53 crc kubenswrapper[4757]: I0129 15:10:53.507297 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 15:10:53 crc kubenswrapper[4757]: I0129 15:10:53.509716 4757 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1eda1e717c10a3060d3fa87127fb2907bf5d5686f42038ba830e5106271c7977" exitCode=255 Jan 29 15:10:53 crc kubenswrapper[4757]: I0129 15:10:53.509775 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1eda1e717c10a3060d3fa87127fb2907bf5d5686f42038ba830e5106271c7977"} Jan 29 15:10:53 crc kubenswrapper[4757]: I0129 15:10:53.510004 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:53 crc kubenswrapper[4757]: I0129 15:10:53.511228 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:53 crc kubenswrapper[4757]: I0129 15:10:53.511344 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:53 crc kubenswrapper[4757]: I0129 15:10:53.511369 
4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:53 crc kubenswrapper[4757]: I0129 15:10:53.512351 4757 scope.go:117] "RemoveContainer" containerID="1eda1e717c10a3060d3fa87127fb2907bf5d5686f42038ba830e5106271c7977" Jan 29 15:10:54 crc kubenswrapper[4757]: I0129 15:10:54.385320 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 09:46:33.453202541 +0000 UTC Jan 29 15:10:54 crc kubenswrapper[4757]: I0129 15:10:54.514794 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 15:10:54 crc kubenswrapper[4757]: I0129 15:10:54.517223 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08"} Jan 29 15:10:54 crc kubenswrapper[4757]: I0129 15:10:54.517464 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:54 crc kubenswrapper[4757]: I0129 15:10:54.518403 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:54 crc kubenswrapper[4757]: I0129 15:10:54.518434 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:54 crc kubenswrapper[4757]: I0129 15:10:54.518445 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:55 crc kubenswrapper[4757]: I0129 15:10:55.385859 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 01:37:58.15503334 +0000 UTC Jan 29 15:10:55 crc kubenswrapper[4757]: I0129 15:10:55.804510 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 29 15:10:55 crc kubenswrapper[4757]: I0129 15:10:55.804714 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:55 crc kubenswrapper[4757]: I0129 15:10:55.806017 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:55 crc kubenswrapper[4757]: I0129 15:10:55.806083 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:55 crc kubenswrapper[4757]: I0129 15:10:55.806097 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:55 crc kubenswrapper[4757]: I0129 15:10:55.818913 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 29 15:10:55 crc kubenswrapper[4757]: I0129 15:10:55.994300 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:10:55 crc kubenswrapper[4757]: I0129 15:10:55.994474 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:55 crc kubenswrapper[4757]: I0129 15:10:55.994623 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:10:55 crc kubenswrapper[4757]: I0129 15:10:55.995712 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:55 crc kubenswrapper[4757]: I0129 15:10:55.995768 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:55 crc kubenswrapper[4757]: I0129 15:10:55.995784 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:56 crc kubenswrapper[4757]: I0129 15:10:56.000315 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:10:56 crc kubenswrapper[4757]: I0129 15:10:56.386880 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 15:15:40.484831971 +0000 UTC Jan 29 15:10:56 crc kubenswrapper[4757]: I0129 15:10:56.523185 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:56 crc kubenswrapper[4757]: I0129 15:10:56.523258 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:56 crc kubenswrapper[4757]: I0129 15:10:56.524542 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:56 crc kubenswrapper[4757]: I0129 15:10:56.524584 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:56 crc kubenswrapper[4757]: I0129 15:10:56.524597 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:56 crc kubenswrapper[4757]: I0129 15:10:56.524756 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:56 crc kubenswrapper[4757]: I0129 15:10:56.524823 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:56 crc kubenswrapper[4757]: I0129 15:10:56.524839 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:57 crc kubenswrapper[4757]: I0129 15:10:57.387080 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 03:12:46.804488387 +0000 UTC Jan 29 15:10:57 crc kubenswrapper[4757]: E0129 15:10:57.457475 4757 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 15:10:57 crc kubenswrapper[4757]: I0129 15:10:57.526039 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:10:57 crc kubenswrapper[4757]: I0129 15:10:57.529385 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:10:57 crc kubenswrapper[4757]: I0129 15:10:57.529443 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:10:57 crc kubenswrapper[4757]: I0129 15:10:57.529457 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:10:57 crc kubenswrapper[4757]: E0129 15:10:57.727923 4757 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 29 15:10:57 crc kubenswrapper[4757]: I0129 15:10:57.733304 4757 trace.go:236] Trace[1925978512]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 15:10:46.045) (total time: 11687ms): Jan 29 15:10:57 crc kubenswrapper[4757]: Trace[1925978512]: ---"Objects listed" error: 11687ms (15:10:57.733) Jan 29 15:10:57 crc kubenswrapper[4757]: Trace[1925978512]: [11.687611989s] [11.687611989s] END Jan 29 15:10:57 crc kubenswrapper[4757]: I0129 15:10:57.733337 4757 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 15:10:57 crc kubenswrapper[4757]: E0129 15:10:57.733387 4757 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 29 15:10:57 crc kubenswrapper[4757]: I0129 15:10:57.736900 4757 trace.go:236] Trace[132468267]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 15:10:46.401) (total time: 11335ms): Jan 29 15:10:57 crc kubenswrapper[4757]: Trace[132468267]: ---"Objects listed" error: 11335ms (15:10:57.736) Jan 29 15:10:57 crc kubenswrapper[4757]: Trace[132468267]: [11.335370784s] [11.335370784s] END Jan 29 15:10:57 crc kubenswrapper[4757]: I0129 15:10:57.736933 4757 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 29 15:10:57 crc kubenswrapper[4757]: I0129 15:10:57.737043 4757 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 29 15:10:57 crc kubenswrapper[4757]: I0129 15:10:57.737493 4757 trace.go:236] Trace[643173100]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 15:10:45.739) (total time: 11997ms): Jan 29 15:10:57 crc kubenswrapper[4757]: Trace[643173100]: ---"Objects listed" error: 11997ms (15:10:57.737) Jan 29 15:10:57 crc kubenswrapper[4757]: Trace[643173100]: [11.997694979s] [11.997694979s] END Jan 29 15:10:57 crc kubenswrapper[4757]: I0129 15:10:57.737516 4757 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 15:10:57 crc kubenswrapper[4757]: I0129 15:10:57.737676 4757 trace.go:236] Trace[1879542180]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 15:10:46.307) (total time: 11429ms): Jan 29 15:10:57 crc kubenswrapper[4757]: Trace[1879542180]: ---"Objects listed" error: 11429ms (15:10:57.737) Jan 29 15:10:57 crc kubenswrapper[4757]: Trace[1879542180]: [11.429904207s] [11.429904207s] END Jan 29 15:10:57 crc kubenswrapper[4757]: I0129 15:10:57.737693 4757 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 15:10:57 crc kubenswrapper[4757]: I0129 15:10:57.772416 4757 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 29 15:10:57 crc kubenswrapper[4757]: I0129 15:10:57.794161 4757 csr.go:261] certificate signing request csr-w6pjt is approved, waiting to be issued Jan 29 15:10:57 crc kubenswrapper[4757]: I0129 15:10:57.806496 4757 csr.go:257] certificate signing request csr-w6pjt is issued Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.317105 4757 
apiserver.go:52] "Watching apiserver" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.333863 4757 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.334172 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-rmlkd","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.334612 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.334661 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:10:58 crc kubenswrapper[4757]: E0129 15:10:58.334677 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.334689 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:10:58 crc kubenswrapper[4757]: E0129 15:10:58.334756 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.334862 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.335506 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-rmlkd" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.335719 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:10:58 crc kubenswrapper[4757]: E0129 15:10:58.335756 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.335988 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.336648 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.337747 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.338206 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.338255 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.338964 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.339002 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.339221 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.339475 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.339547 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.339555 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.339618 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.339826 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.355507 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.368898 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.380976 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.388080 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 09:54:43.676623837 +0000 UTC Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.390544 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.405813 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.414656 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.424808 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.429301 4757 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.436805 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440323 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440376 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440402 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440435 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440459 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440479 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440500 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440522 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440543 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440563 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440585 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440610 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440632 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440655 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440675 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440695 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440714 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440737 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440758 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440779 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440800 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440822 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440841 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440860 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440880 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440902 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440922 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440991 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441020 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441044 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441067 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441089 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441112 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441134 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441160 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441180 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441203 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441231 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441249 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441288 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441318 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441339 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441363 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441389 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441413 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441437 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441460 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441484 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441509 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441535 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441562 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441580 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441597 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441618 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441636 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441650 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441665 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441680 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441695 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441712 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441729 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441745 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441764 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441786 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441807 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441824 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441839 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441855 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441872 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441889 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441905 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441920 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441935 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441953 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441969 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441985 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442000 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442021 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442043 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442059 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442074 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442089 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442106 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442122 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442139 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442155 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442171 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442188 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442207 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442223 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442240 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442256 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442290 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442325 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442341 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442356 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442373 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442389 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442406 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442421 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442436 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442451 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442468 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442483 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442500 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442515 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442531 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442547 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.440869 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442563 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442561 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.443021 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.443053 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.443079 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.443294 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.443488 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.443765 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441342 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441496 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441541 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441685 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441739 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441859 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441911 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441924 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441989 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.441996 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442100 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442292 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442358 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442457 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.442551 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.444100 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.444315 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.444541 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.444566 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.444733 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.444913 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.445104 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.445119 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.445149 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.445175 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.445197 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.445222 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.445244 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.445281 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.445306 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.445357 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.445387 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.445536 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.445881 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.446140 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.446658 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.446833 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.446870 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.446892 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.446919 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.446954 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.446979 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447013 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447038 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447061 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447085 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447107 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447131 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447155 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447177 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447190 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447199 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447312 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447338 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447388 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447406 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447448 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447476 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447499 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447551 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447578 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID:
\"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447623 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447663 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447649 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447715 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447738 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447757 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447783 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447801 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447818 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447837 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447857 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447883 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.447981 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448007 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448026 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448042 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448061 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448079 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448096 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448114 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448132 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448149 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448166 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448183 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448202 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448218 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448243 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448274 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448299 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod 
\"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448317 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448342 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448360 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448367 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448380 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448491 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448523 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448574 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448603 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448639 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448647 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448658 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448680 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448731 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448760 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448756 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448787 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448840 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448850 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448892 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448922 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448950 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.448998 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.449027 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.449076 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.449102 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.449152 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.449227 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/10107436-84cd-4f7f-8f92-2a403cdfe4e9-hosts-file\") pod \"node-resolver-rmlkd\" (UID: \"10107436-84cd-4f7f-8f92-2a403cdfe4e9\") " pod="openshift-dns/node-resolver-rmlkd" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.449253 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod 
"57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.449290 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.449325 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.449375 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.449402 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.449448 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.449466 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.449478 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.449524 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.449551 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.449574 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.449577 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.449623 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.449653 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.449702 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.449730 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.449736 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.449849 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.449917 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.449967 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxxg9\" (UniqueName: \"kubernetes.io/projected/10107436-84cd-4f7f-8f92-2a403cdfe4e9-kube-api-access-qxxg9\") pod \"node-resolver-rmlkd\" (UID: \"10107436-84cd-4f7f-8f92-2a403cdfe4e9\") " pod="openshift-dns/node-resolver-rmlkd" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.450027 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.450195 4757 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.450217 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.450615 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.450616 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.450825 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.450985 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.451231 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.451496 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.451526 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.451609 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.451984 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.452015 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.452118 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.452286 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.453014 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.452992 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.453204 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.453297 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.453337 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.453500 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.453518 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.453731 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.453936 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.453962 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.454143 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.454342 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.454357 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.454439 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.454888 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.454895 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.454961 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.454961 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: E0129 15:10:58.455002 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:10:58.954977593 +0000 UTC m=+22.244227910 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.455483 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.455735 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.455902 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.456167 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.456054 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.456426 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.456466 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.456598 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.456784 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.456888 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.456967 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.457083 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.457322 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.457336 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.457357 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.457446 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.457708 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.457775 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.458074 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.458107 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.458240 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.461005 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.461218 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.462370 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.462842 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.463203 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.463577 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.464040 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.464315 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.464399 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.464524 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.464589 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.465931 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.466305 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.466370 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). 
InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.466397 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.466430 4757 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.466499 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.466712 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.466894 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.467972 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.468096 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.468396 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.468458 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.468539 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.468919 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.469105 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.469141 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.450835 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.469347 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.469210 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.469383 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.469649 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.469676 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.469896 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.469946 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.470028 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.470375 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.470646 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.470796 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.470831 4757 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471093 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471099 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471238 4757 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471291 4757 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471315 4757 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471337 4757 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471354 4757 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471369 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471384 4757 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471412 4757 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471431 4757 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471445 4757 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471460 4757 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471474 4757 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471488 4757 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc 
kubenswrapper[4757]: I0129 15:10:58.471502 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471515 4757 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471528 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471541 4757 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471552 4757 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471564 4757 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471576 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471588 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471602 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471616 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471626 4757 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471637 4757 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471647 4757 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: 
I0129 15:10:58.471658 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471668 4757 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471678 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471688 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471698 4757 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471706 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471967 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.471978 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.472007 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.472975 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.473118 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.475411 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.475759 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.475882 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.475909 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.476163 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.476328 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.476436 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.476471 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.476668 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.476678 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.476872 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.477006 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.477099 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.477482 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.477695 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.477901 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.488491 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.489163 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.489387 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.489729 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.489825 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: E0129 15:10:58.490306 4757 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.490508 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.490546 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.490591 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: E0129 15:10:58.490654 4757 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:10:58 crc kubenswrapper[4757]: E0129 15:10:58.490770 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:10:58.990744181 +0000 UTC m=+22.279994408 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:10:58 crc kubenswrapper[4757]: E0129 15:10:58.490975 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:10:58.990926647 +0000 UTC m=+22.280176884 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.491918 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.493982 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.494417 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.494467 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.494518 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.495619 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.496597 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.497890 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.500385 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.498057 4757 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.502447 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.502524 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.502714 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.502821 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.502934 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.503505 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.503582 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.511371 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.515237 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.515340 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.515415 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.516125 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.518351 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.518633 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.518673 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.518932 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.519104 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:10:58 crc kubenswrapper[4757]: E0129 15:10:58.519222 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:10:58 crc kubenswrapper[4757]: E0129 15:10:58.519235 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:10:58 crc kubenswrapper[4757]: E0129 15:10:58.519249 4757 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:10:58 crc kubenswrapper[4757]: E0129 15:10:58.519307 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:10:59.01929134 +0000 UTC m=+22.308541577 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:10:58 crc kubenswrapper[4757]: E0129 15:10:58.519335 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:10:58 crc kubenswrapper[4757]: E0129 15:10:58.519345 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:10:58 crc kubenswrapper[4757]: E0129 15:10:58.519354 4757 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:10:58 crc kubenswrapper[4757]: E0129 15:10:58.519375 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:10:59.019368902 +0000 UTC m=+22.308619139 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.523213 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.529356 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.532528 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.533960 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.534259 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.545173 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.552462 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.553484 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.558259 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.572878 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573168 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573210 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxxg9\" (UniqueName: \"kubernetes.io/projected/10107436-84cd-4f7f-8f92-2a403cdfe4e9-kube-api-access-qxxg9\") pod \"node-resolver-rmlkd\" (UID: \"10107436-84cd-4f7f-8f92-2a403cdfe4e9\") " pod="openshift-dns/node-resolver-rmlkd" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573227 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573247 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/10107436-84cd-4f7f-8f92-2a403cdfe4e9-hosts-file\") pod \"node-resolver-rmlkd\" (UID: \"10107436-84cd-4f7f-8f92-2a403cdfe4e9\") " pod="openshift-dns/node-resolver-rmlkd" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573306 4757 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573317 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573325 4757 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573333 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573342 4757 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573350 4757 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573359 4757 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573367 4757 
reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573375 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573384 4757 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573393 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573402 4757 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573410 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573418 4757 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573426 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573435 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573445 4757 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573456 4757 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573464 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573473 4757 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573482 4757 reconciler_common.go:293] "Volume 
detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573492 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573502 4757 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573511 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573518 4757 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573527 4757 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573535 4757 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573520 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573544 4757 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573608 4757 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573623 4757 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573637 4757 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573653 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573666 4757 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573678 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573681 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/10107436-84cd-4f7f-8f92-2a403cdfe4e9-hosts-file\") pod \"node-resolver-rmlkd\" (UID: \"10107436-84cd-4f7f-8f92-2a403cdfe4e9\") " pod="openshift-dns/node-resolver-rmlkd" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573689 4757 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573706 4757 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573718 4757 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573727 4757 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573736 4757 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" 
(UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573745 4757 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573754 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573764 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573773 4757 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573782 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573790 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573797 4757 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573819 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573827 4757 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573834 4757 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573842 4757 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573851 4757 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573859 4757 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573867 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573875 4757 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573883 4757 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573891 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573899 4757 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573906 4757 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573914 4757 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573922 4757 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573932 4757 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573942 4757 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573951 4757 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573959 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573966 4757 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573976 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573985 4757 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.573994 4757 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574003 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574014 4757 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574022 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574032 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574041 4757 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574051 4757 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574059 4757 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574068 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574076 4757 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574084 4757 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574095 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574104 4757 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574112 4757 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574121 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574131 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574139 4757 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574142 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574148 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574157 4757 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574168 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574177 4757 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574184 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574192 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574200 4757 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574209 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574218 4757 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574227 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574237 4757 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574246 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574255 4757 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574277 4757 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574285 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574295 4757 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574304 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574312 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574323 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574331 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574342 4757 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574350 4757 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574119 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574358 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574422 4757 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574434 4757 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574446 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574456 4757 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574465 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574474 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc 
kubenswrapper[4757]: I0129 15:10:58.574484 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574493 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574502 4757 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574511 4757 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574521 4757 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574533 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574545 4757 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574557 4757 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574570 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574582 4757 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574593 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574604 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574620 4757 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: 
I0129 15:10:58.574633 4757 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574643 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574653 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574663 4757 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574672 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574683 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574692 4757 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574702 4757 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574713 4757 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574725 4757 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574735 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574748 4757 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574760 4757 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:10:58 
crc kubenswrapper[4757]: I0129 15:10:58.574770 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574778 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574787 4757 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574796 4757 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574805 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574814 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574822 4757 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574848 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574857 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574868 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574877 4757 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574885 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574894 4757 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574904 4757 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574913 4757 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.574924 4757 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.578195 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.593199 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.600460 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxxg9\" (UniqueName: \"kubernetes.io/projected/10107436-84cd-4f7f-8f92-2a403cdfe4e9-kube-api-access-qxxg9\") pod \"node-resolver-rmlkd\" (UID: \"10107436-84cd-4f7f-8f92-2a403cdfe4e9\") " pod="openshift-dns/node-resolver-rmlkd"
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.651490 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.658769 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 29 15:10:58 crc kubenswrapper[4757]: W0129 15:10:58.662138 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-41bb21f4d360f16357babffd4419bec3ad3264ea4490e8dfc33736c7dd25149b WatchSource:0}: Error finding container 41bb21f4d360f16357babffd4419bec3ad3264ea4490e8dfc33736c7dd25149b: Status 404 returned error can't find the container with id 41bb21f4d360f16357babffd4419bec3ad3264ea4490e8dfc33736c7dd25149b
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.667248 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-rmlkd"
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.674098 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.676049 4757 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.676079 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.676092 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.808292 4757 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-29 15:05:57 +0000 UTC, rotation deadline is 2026-11-22 21:44:09.451805145 +0000 UTC
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.808698 4757 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7134h33m10.643110271s for next certificate rotation
Jan 29 15:10:58 crc kubenswrapper[4757]: I0129 15:10:58.978492 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:10:58 crc kubenswrapper[4757]: E0129 15:10:58.978688 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:10:59.97865775 +0000 UTC m=+23.267907987 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.079877 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.079927 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.079952 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.079980 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:10:59 crc kubenswrapper[4757]: E0129 15:10:59.080023 4757 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 29 15:10:59 crc kubenswrapper[4757]: E0129 15:10:59.080054 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 29 15:10:59 crc kubenswrapper[4757]: E0129 15:10:59.080076 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 29 15:10:59 crc kubenswrapper[4757]: E0129 15:10:59.080088 4757 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 15:10:59 crc kubenswrapper[4757]: E0129 15:10:59.080090 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:11:00.080071685 +0000 UTC m=+23.369321922 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 29 15:10:59 crc kubenswrapper[4757]: E0129 15:10:59.080091 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 29 15:10:59 crc kubenswrapper[4757]: E0129 15:10:59.080116 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 29 15:10:59 crc kubenswrapper[4757]: E0129 15:10:59.080122 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:11:00.080111136 +0000 UTC m=+23.369361373 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 15:10:59 crc kubenswrapper[4757]: E0129 15:10:59.080126 4757 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 15:10:59 crc kubenswrapper[4757]: E0129 15:10:59.080173 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:11:00.080162437 +0000 UTC m=+23.369412674 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 15:10:59 crc kubenswrapper[4757]: E0129 15:10:59.080056 4757 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 15:10:59 crc kubenswrapper[4757]: E0129 15:10:59.080207 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:11:00.080199648 +0000 UTC m=+23.369449885 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.388309 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 10:08:00.310333914 +0000 UTC
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.399784 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.400432 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.401428 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.402211 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.403770 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.404369 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.405539 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.406219 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.407556 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.408181 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.409261 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.410033 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.412498 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.413181 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.413934 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.415792 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.416644 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.418496 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.420527 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.421306 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.421939 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.424735 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.425351 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.427742 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.428486 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.430750 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.433779 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.434387 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.435519 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.436156 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.436736 4757 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.436867 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.440331 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.440913 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.442606 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.446514 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.450762 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.451411 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.452674 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.453326 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.454183 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.454854 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.455857 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.457063 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.457582 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.458218 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.459425 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.460284 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.461379 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.461927 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.462940 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.463639 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.464203 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes"
path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.539093 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f"} Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.539174 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"41bb21f4d360f16357babffd4419bec3ad3264ea4490e8dfc33736c7dd25149b"} Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.541146 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7eb22f68b946d2fa817bbc4c8e9c74176e0cfd41a5c90a6a5a4b263e56b3e2c6"} Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.542548 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-rmlkd" event={"ID":"10107436-84cd-4f7f-8f92-2a403cdfe4e9","Type":"ContainerStarted","Data":"9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4"} Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.542579 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-rmlkd" event={"ID":"10107436-84cd-4f7f-8f92-2a403cdfe4e9","Type":"ContainerStarted","Data":"1e32f8587262d650ff11c452184796e7ba6a1c66300a81bcbf517e465b54de8b"} Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.544602 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb"} Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.544644 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a"} Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.544659 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"464a1898e308c490c453a460491aa7222054278459b898ba6856b01d5f9d1deb"} Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.559078 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.574775 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.588999 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.600351 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.610094 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.623312 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.630438 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.630438 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.641304 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.653589 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.663601 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.677032 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.689247 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.703369 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.713756 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.965016 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-qxr9t"]
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.965524 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-qxr9t"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.967122 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.967992 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.970135 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.970401 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.986862 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.988124 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:10:59 crc kubenswrapper[4757]: E0129 15:10:59.988882 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:11:01.988815504 +0000 UTC m=+25.278065751 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.993491 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-bcbdt"] Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.994035 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-bcbdt" Jan 29 15:10:59 crc kubenswrapper[4757]: I0129 15:10:59.999166 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.003343 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.005341 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.005478 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.005711 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.013099 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.022653 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.031840 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.041680 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.050363 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.062516 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.072460 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.089528 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fe6866d7-5a43-46d5-ba84-264847f9cd30-multus-daemon-config\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.089573 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-host-run-multus-certs\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.089606 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.089628 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-host-run-k8s-cni-cncf-io\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.089652 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-host-var-lib-cni-multus\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.089670 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-host-var-lib-kubelet\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: E0129 15:11:00.089677 4757 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.089697 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.089719 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdqjx\" (UniqueName: \"kubernetes.io/projected/fe6866d7-5a43-46d5-ba84-264847f9cd30-kube-api-access-pdqjx\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: 
E0129 15:11:00.089744 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:11:02.089726364 +0000 UTC m=+25.378976601 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.089765 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:11:00 crc kubenswrapper[4757]: E0129 15:11:00.089826 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:11:00 crc kubenswrapper[4757]: E0129 15:11:00.089837 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:11:00 crc kubenswrapper[4757]: E0129 15:11:00.089846 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:11:00 crc kubenswrapper[4757]: E0129 15:11:00.089851 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:11:00 crc kubenswrapper[4757]: E0129 15:11:00.089861 4757 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:11:00 crc kubenswrapper[4757]: E0129 15:11:00.089861 4757 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.089833 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3ac5eae5-5794-458e-b182-a3203b6638d1-serviceca\") pod \"node-ca-qxr9t\" (UID: \"3ac5eae5-5794-458e-b182-a3203b6638d1\") " pod="openshift-image-registry/node-ca-qxr9t" Jan 29 15:11:00 crc kubenswrapper[4757]: E0129 15:11:00.089896 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:11:02.089885519 +0000 UTC m=+25.379135816 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:11:00 crc kubenswrapper[4757]: E0129 15:11:00.089930 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:11:02.08992137 +0000 UTC m=+25.379171717 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.089945 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h4kv\" (UniqueName: \"kubernetes.io/projected/3ac5eae5-5794-458e-b182-a3203b6638d1-kube-api-access-2h4kv\") pod \"node-ca-qxr9t\" (UID: \"3ac5eae5-5794-458e-b182-a3203b6638d1\") " pod="openshift-image-registry/node-ca-qxr9t" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.089967 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-hostroot\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.089986 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-etc-kubernetes\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.090036 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-os-release\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.090057 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-multus-socket-dir-parent\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.090078 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-cnibin\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 
15:11:00.090096 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fe6866d7-5a43-46d5-ba84-264847f9cd30-cni-binary-copy\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.090114 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-host-run-netns\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.090134 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-host-var-lib-cni-bin\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.090177 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.090202 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-system-cni-dir\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: E0129 15:11:00.090219 4757 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.090223 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-multus-cni-dir\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: E0129 15:11:00.090253 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:11:02.09024367 +0000 UTC m=+25.379493907 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.090283 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3ac5eae5-5794-458e-b182-a3203b6638d1-host\") pod \"node-ca-qxr9t\" (UID: \"3ac5eae5-5794-458e-b182-a3203b6638d1\") " pod="openshift-image-registry/node-ca-qxr9t" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.090305 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-multus-conf-dir\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.090633 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.105036 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.122018 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.130541 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.141955 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.150149 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.160008 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.169812 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.184496 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.190848 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fe6866d7-5a43-46d5-ba84-264847f9cd30-multus-daemon-config\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.190894 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-host-run-multus-certs\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.190927 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-host-var-lib-cni-multus\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.190945 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-host-var-lib-kubelet\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.190980 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-host-run-k8s-cni-cncf-io\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.190995 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdqjx\" (UniqueName: \"kubernetes.io/projected/fe6866d7-5a43-46d5-ba84-264847f9cd30-kube-api-access-pdqjx\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191010 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3ac5eae5-5794-458e-b182-a3203b6638d1-serviceca\") pod \"node-ca-qxr9t\" (UID: \"3ac5eae5-5794-458e-b182-a3203b6638d1\") " pod="openshift-image-registry/node-ca-qxr9t" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191034 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h4kv\" (UniqueName: \"kubernetes.io/projected/3ac5eae5-5794-458e-b182-a3203b6638d1-kube-api-access-2h4kv\") pod \"node-ca-qxr9t\" (UID: \"3ac5eae5-5794-458e-b182-a3203b6638d1\") " pod="openshift-image-registry/node-ca-qxr9t" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191049 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-hostroot\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191051 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-host-var-lib-kubelet\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191087 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-etc-kubernetes\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191064 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-etc-kubernetes\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191118 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-os-release\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 
15:11:00.191137 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-multus-socket-dir-parent\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191152 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-host-run-netns\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191166 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-host-var-lib-cni-bin\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191182 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-cnibin\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191196 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fe6866d7-5a43-46d5-ba84-264847f9cd30-cni-binary-copy\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191211 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-system-cni-dir\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191239 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-multus-socket-dir-parent\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191048 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-host-run-k8s-cni-cncf-io\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191225 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-multus-cni-dir\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191379 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3ac5eae5-5794-458e-b182-a3203b6638d1-host\") pod \"node-ca-qxr9t\" (UID: 
\"3ac5eae5-5794-458e-b182-a3203b6638d1\") " pod="openshift-image-registry/node-ca-qxr9t" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191400 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-multus-conf-dir\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191427 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-multus-cni-dir\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191455 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-multus-conf-dir\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191457 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-host-run-netns\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191473 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-host-var-lib-cni-bin\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191502 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-hostroot\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191510 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-cnibin\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191526 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3ac5eae5-5794-458e-b182-a3203b6638d1-host\") pod \"node-ca-qxr9t\" (UID: \"3ac5eae5-5794-458e-b182-a3203b6638d1\") " pod="openshift-image-registry/node-ca-qxr9t" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191608 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-system-cni-dir\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191633 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-host-var-lib-cni-multus\") pod 
\"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191697 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-os-release\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191738 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fe6866d7-5a43-46d5-ba84-264847f9cd30-host-run-multus-certs\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.191766 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fe6866d7-5a43-46d5-ba84-264847f9cd30-multus-daemon-config\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.192292 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fe6866d7-5a43-46d5-ba84-264847f9cd30-cni-binary-copy\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.192336 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3ac5eae5-5794-458e-b182-a3203b6638d1-serviceca\") pod \"node-ca-qxr9t\" (UID: \"3ac5eae5-5794-458e-b182-a3203b6638d1\") " pod="openshift-image-registry/node-ca-qxr9t" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.215128 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdqjx\" (UniqueName: \"kubernetes.io/projected/fe6866d7-5a43-46d5-ba84-264847f9cd30-kube-api-access-pdqjx\") pod \"multus-bcbdt\" (UID: \"fe6866d7-5a43-46d5-ba84-264847f9cd30\") " pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.223989 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h4kv\" (UniqueName: \"kubernetes.io/projected/3ac5eae5-5794-458e-b182-a3203b6638d1-kube-api-access-2h4kv\") pod \"node-ca-qxr9t\" (UID: \"3ac5eae5-5794-458e-b182-a3203b6638d1\") " pod="openshift-image-registry/node-ca-qxr9t" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.279103 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-qxr9t" Jan 29 15:11:00 crc kubenswrapper[4757]: W0129 15:11:00.302573 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ac5eae5_5794_458e_b182_a3203b6638d1.slice/crio-d7165829f4de29a8d7991a5c33048a0345ecd1f71e79a3f7282b801b0911cd77 WatchSource:0}: Error finding container d7165829f4de29a8d7991a5c33048a0345ecd1f71e79a3f7282b801b0911cd77: Status 404 returned error can't find the container with id d7165829f4de29a8d7991a5c33048a0345ecd1f71e79a3f7282b801b0911cd77 Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.309341 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-bcbdt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.389333 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 06:28:43.174080014 +0000 UTC Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.395861 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.395880 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.395888 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:11:00 crc kubenswrapper[4757]: E0129 15:11:00.395990 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:11:00 crc kubenswrapper[4757]: E0129 15:11:00.396079 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:11:00 crc kubenswrapper[4757]: E0129 15:11:00.396165 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.399383 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.414684 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.421132 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.432161 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.446248 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.455557 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.464676 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.477583 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.518935 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.547124 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qxr9t" event={"ID":"3ac5eae5-5794-458e-b182-a3203b6638d1","Type":"ContainerStarted","Data":"d7165829f4de29a8d7991a5c33048a0345ecd1f71e79a3f7282b801b0911cd77"} Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.548527 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bcbdt" event={"ID":"fe6866d7-5a43-46d5-ba84-264847f9cd30","Type":"ContainerStarted","Data":"8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2"} Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.548578 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bcbdt" event={"ID":"fe6866d7-5a43-46d5-ba84-264847f9cd30","Type":"ContainerStarted","Data":"d1d9258259f36646a531c7f80722b330c2d703f97dff63b2696da4c0eed71554"} Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.550139 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.550589 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.552109 4757 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08" exitCode=255 Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.552195 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08"} Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.552289 4757 scope.go:117] "RemoveContainer" containerID="1eda1e717c10a3060d3fa87127fb2907bf5d5686f42038ba830e5106271c7977" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.571684 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.606980 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.628467 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.648483 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.684835 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.685073 4757 scope.go:117] "RemoveContainer" containerID="1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08" Jan 29 15:11:00 crc kubenswrapper[4757]: E0129 15:11:00.685307 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.689731 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.720818 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.738011 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.755054 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.766637 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-45q8t"] Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.767113 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.769668 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.769856 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.772378 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-dxk67"] Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.773052 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dxk67" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.773770 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.777463 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.777679 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.777700 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.778001 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.796290 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.811204 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.826418 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.844004 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.856509 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.875796 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.887988 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eda1e717c10a3060d3fa87127fb2907bf5d5686f42038ba830e5106271c7977\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:52Z\\\",\\\"message\\\":\\\"W0129 15:10:41.834292 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:10:41.834677 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769699441 cert, and key in /tmp/serving-cert-584024862/serving-signer.crt, /tmp/serving-cert-584024862/serving-signer.key\\\\nI0129 15:10:42.075370 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:10:42.076714 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:10:42.076833 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:42.080940 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-584024862/tls.crt::/tmp/serving-cert-584024862/tls.key\\\\\\\"\\\\nF0129 15:10:52.667593 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating 
requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 
15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.896919 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f453676a-fbf0-4159-8a5a-04c0138b42c1-mcd-auth-proxy-config\") pod \"machine-config-daemon-45q8t\" (UID: \"f453676a-fbf0-4159-8a5a-04c0138b42c1\") " pod="openshift-machine-config-operator/machine-config-daemon-45q8t" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.896954 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2ad19a70-dd88-4323-b98b-ae01159e0c64-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dxk67\" (UID: \"2ad19a70-dd88-4323-b98b-ae01159e0c64\") " pod="openshift-multus/multus-additional-cni-plugins-dxk67" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.896983 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f453676a-fbf0-4159-8a5a-04c0138b42c1-rootfs\") pod \"machine-config-daemon-45q8t\" (UID: \"f453676a-fbf0-4159-8a5a-04c0138b42c1\") " pod="openshift-machine-config-operator/machine-config-daemon-45q8t" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.897025 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2ad19a70-dd88-4323-b98b-ae01159e0c64-os-release\") pod \"multus-additional-cni-plugins-dxk67\" (UID: \"2ad19a70-dd88-4323-b98b-ae01159e0c64\") " pod="openshift-multus/multus-additional-cni-plugins-dxk67" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.897049 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f453676a-fbf0-4159-8a5a-04c0138b42c1-proxy-tls\") pod \"machine-config-daemon-45q8t\" (UID: \"f453676a-fbf0-4159-8a5a-04c0138b42c1\") " pod="openshift-machine-config-operator/machine-config-daemon-45q8t" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.897072 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgr8f\" (UniqueName: \"kubernetes.io/projected/f453676a-fbf0-4159-8a5a-04c0138b42c1-kube-api-access-tgr8f\") pod \"machine-config-daemon-45q8t\" (UID: \"f453676a-fbf0-4159-8a5a-04c0138b42c1\") " pod="openshift-machine-config-operator/machine-config-daemon-45q8t" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.897086 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2ad19a70-dd88-4323-b98b-ae01159e0c64-system-cni-dir\") pod \"multus-additional-cni-plugins-dxk67\" (UID: \"2ad19a70-dd88-4323-b98b-ae01159e0c64\") " pod="openshift-multus/multus-additional-cni-plugins-dxk67" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.897107 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2ad19a70-dd88-4323-b98b-ae01159e0c64-cnibin\") pod \"multus-additional-cni-plugins-dxk67\" (UID: \"2ad19a70-dd88-4323-b98b-ae01159e0c64\") " pod="openshift-multus/multus-additional-cni-plugins-dxk67" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.897139 4757 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2ad19a70-dd88-4323-b98b-ae01159e0c64-cni-binary-copy\") pod \"multus-additional-cni-plugins-dxk67\" (UID: \"2ad19a70-dd88-4323-b98b-ae01159e0c64\") " pod="openshift-multus/multus-additional-cni-plugins-dxk67" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.897161 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2ad19a70-dd88-4323-b98b-ae01159e0c64-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dxk67\" (UID: \"2ad19a70-dd88-4323-b98b-ae01159e0c64\") " pod="openshift-multus/multus-additional-cni-plugins-dxk67" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.897183 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7xg7\" (UniqueName: \"kubernetes.io/projected/2ad19a70-dd88-4323-b98b-ae01159e0c64-kube-api-access-t7xg7\") pod \"multus-additional-cni-plugins-dxk67\" (UID: \"2ad19a70-dd88-4323-b98b-ae01159e0c64\") " pod="openshift-multus/multus-additional-cni-plugins-dxk67" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.898375 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf
-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.909896 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.920376 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.928838 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.939523 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.951820 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.961702 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.971559 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.982524 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.996622 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:00Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.997845 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f453676a-fbf0-4159-8a5a-04c0138b42c1-proxy-tls\") pod \"machine-config-daemon-45q8t\" (UID: \"f453676a-fbf0-4159-8a5a-04c0138b42c1\") " pod="openshift-machine-config-operator/machine-config-daemon-45q8t" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.997903 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgr8f\" (UniqueName: \"kubernetes.io/projected/f453676a-fbf0-4159-8a5a-04c0138b42c1-kube-api-access-tgr8f\") pod \"machine-config-daemon-45q8t\" (UID: \"f453676a-fbf0-4159-8a5a-04c0138b42c1\") " pod="openshift-machine-config-operator/machine-config-daemon-45q8t" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.997950 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2ad19a70-dd88-4323-b98b-ae01159e0c64-system-cni-dir\") pod \"multus-additional-cni-plugins-dxk67\" (UID: \"2ad19a70-dd88-4323-b98b-ae01159e0c64\") " pod="openshift-multus/multus-additional-cni-plugins-dxk67" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.997972 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2ad19a70-dd88-4323-b98b-ae01159e0c64-cnibin\") pod \"multus-additional-cni-plugins-dxk67\" (UID: \"2ad19a70-dd88-4323-b98b-ae01159e0c64\") " pod="openshift-multus/multus-additional-cni-plugins-dxk67" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.998038 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2ad19a70-dd88-4323-b98b-ae01159e0c64-system-cni-dir\") pod \"multus-additional-cni-plugins-dxk67\" (UID: \"2ad19a70-dd88-4323-b98b-ae01159e0c64\") " pod="openshift-multus/multus-additional-cni-plugins-dxk67" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.997991 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2ad19a70-dd88-4323-b98b-ae01159e0c64-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dxk67\" (UID: \"2ad19a70-dd88-4323-b98b-ae01159e0c64\") " pod="openshift-multus/multus-additional-cni-plugins-dxk67" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.998096 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7xg7\" (UniqueName: \"kubernetes.io/projected/2ad19a70-dd88-4323-b98b-ae01159e0c64-kube-api-access-t7xg7\") pod \"multus-additional-cni-plugins-dxk67\" (UID: \"2ad19a70-dd88-4323-b98b-ae01159e0c64\") " pod="openshift-multus/multus-additional-cni-plugins-dxk67" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.998137 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2ad19a70-dd88-4323-b98b-ae01159e0c64-cnibin\") pod \"multus-additional-cni-plugins-dxk67\" (UID: \"2ad19a70-dd88-4323-b98b-ae01159e0c64\") " pod="openshift-multus/multus-additional-cni-plugins-dxk67" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.998231 4757 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2ad19a70-dd88-4323-b98b-ae01159e0c64-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dxk67\" (UID: \"2ad19a70-dd88-4323-b98b-ae01159e0c64\") " pod="openshift-multus/multus-additional-cni-plugins-dxk67" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.998241 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2ad19a70-dd88-4323-b98b-ae01159e0c64-cni-binary-copy\") pod \"multus-additional-cni-plugins-dxk67\" (UID: \"2ad19a70-dd88-4323-b98b-ae01159e0c64\") " pod="openshift-multus/multus-additional-cni-plugins-dxk67" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.998319 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f453676a-fbf0-4159-8a5a-04c0138b42c1-mcd-auth-proxy-config\") pod \"machine-config-daemon-45q8t\" (UID: \"f453676a-fbf0-4159-8a5a-04c0138b42c1\") " pod="openshift-machine-config-operator/machine-config-daemon-45q8t" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.998357 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2ad19a70-dd88-4323-b98b-ae01159e0c64-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dxk67\" (UID: \"2ad19a70-dd88-4323-b98b-ae01159e0c64\") " pod="openshift-multus/multus-additional-cni-plugins-dxk67" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.998402 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f453676a-fbf0-4159-8a5a-04c0138b42c1-rootfs\") pod \"machine-config-daemon-45q8t\" (UID: \"f453676a-fbf0-4159-8a5a-04c0138b42c1\") " pod="openshift-machine-config-operator/machine-config-daemon-45q8t" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.998430 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2ad19a70-dd88-4323-b98b-ae01159e0c64-os-release\") pod \"multus-additional-cni-plugins-dxk67\" (UID: \"2ad19a70-dd88-4323-b98b-ae01159e0c64\") " pod="openshift-multus/multus-additional-cni-plugins-dxk67" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.998470 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f453676a-fbf0-4159-8a5a-04c0138b42c1-rootfs\") pod \"machine-config-daemon-45q8t\" (UID: \"f453676a-fbf0-4159-8a5a-04c0138b42c1\") " pod="openshift-machine-config-operator/machine-config-daemon-45q8t" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.998654 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2ad19a70-dd88-4323-b98b-ae01159e0c64-os-release\") pod \"multus-additional-cni-plugins-dxk67\" (UID: \"2ad19a70-dd88-4323-b98b-ae01159e0c64\") " pod="openshift-multus/multus-additional-cni-plugins-dxk67" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.999025 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2ad19a70-dd88-4323-b98b-ae01159e0c64-cni-binary-copy\") pod \"multus-additional-cni-plugins-dxk67\" (UID: \"2ad19a70-dd88-4323-b98b-ae01159e0c64\") " 
pod="openshift-multus/multus-additional-cni-plugins-dxk67" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.999091 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2ad19a70-dd88-4323-b98b-ae01159e0c64-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dxk67\" (UID: \"2ad19a70-dd88-4323-b98b-ae01159e0c64\") " pod="openshift-multus/multus-additional-cni-plugins-dxk67" Jan 29 15:11:00 crc kubenswrapper[4757]: I0129 15:11:00.999561 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f453676a-fbf0-4159-8a5a-04c0138b42c1-mcd-auth-proxy-config\") pod \"machine-config-daemon-45q8t\" (UID: \"f453676a-fbf0-4159-8a5a-04c0138b42c1\") " pod="openshift-machine-config-operator/machine-config-daemon-45q8t" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.010077 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f453676a-fbf0-4159-8a5a-04c0138b42c1-proxy-tls\") pod \"machine-config-daemon-45q8t\" (UID: \"f453676a-fbf0-4159-8a5a-04c0138b42c1\") " pod="openshift-machine-config-operator/machine-config-daemon-45q8t" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.015645 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.016919 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgr8f\" (UniqueName: \"kubernetes.io/projected/f453676a-fbf0-4159-8a5a-04c0138b42c1-kube-api-access-tgr8f\") pod \"machine-config-daemon-45q8t\" (UID: \"f453676a-fbf0-4159-8a5a-04c0138b42c1\") " pod="openshift-machine-config-operator/machine-config-daemon-45q8t" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.025791 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7xg7\" (UniqueName: \"kubernetes.io/projected/2ad19a70-dd88-4323-b98b-ae01159e0c64-kube-api-access-t7xg7\") pod \"multus-additional-cni-plugins-dxk67\" (UID: \"2ad19a70-dd88-4323-b98b-ae01159e0c64\") " pod="openshift-multus/multus-additional-cni-plugins-dxk67" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.080484 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.086787 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dxk67" Jan 29 15:11:01 crc kubenswrapper[4757]: W0129 15:11:01.101735 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ad19a70_dd88_4323_b98b_ae01159e0c64.slice/crio-fccc39fb358a4fd1f48b4313ad62e81339dbfc9b56f915db1a1d0cae18b9bc43 WatchSource:0}: Error finding container fccc39fb358a4fd1f48b4313ad62e81339dbfc9b56f915db1a1d0cae18b9bc43: Status 404 returned error can't find the container with id fccc39fb358a4fd1f48b4313ad62e81339dbfc9b56f915db1a1d0cae18b9bc43 Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.152935 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8fwvd"] Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.153808 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.155700 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.156192 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.156715 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.156953 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.157995 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.159027 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.159370 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.171247 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.181433 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.200150 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.214780 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.229085 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.246508 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.265388 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.278797 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.294638 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.300249 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-run-systemd\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.300295 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-cni-netd\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.300313 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-systemd-units\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.300331 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-run-netns\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.300434 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-etc-openvswitch\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.300519 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-run-ovn-kubernetes\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.300539 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e6815a1b-56eb-4075-84ae-1af5d0dcb742-ovnkube-script-lib\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.300557 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zhhj\" (UniqueName: \"kubernetes.io/projected/e6815a1b-56eb-4075-84ae-1af5d0dcb742-kube-api-access-5zhhj\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.300589 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-run-openvswitch\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.300610 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-kubelet\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.300627 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-var-lib-openvswitch\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.300641 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-node-log\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.300672 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e6815a1b-56eb-4075-84ae-1af5d0dcb742-ovnkube-config\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: 
I0129 15:11:01.300694 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-log-socket\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.300743 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.300763 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e6815a1b-56eb-4075-84ae-1af5d0dcb742-ovn-node-metrics-cert\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.300777 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-cni-bin\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.300799 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e6815a1b-56eb-4075-84ae-1af5d0dcb742-env-overrides\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.300816 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-slash\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.300860 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-run-ovn\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.319116 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.339413 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eda1e717c10a3060d3fa87127fb2907bf5d5686f42038ba830e5106271c7977\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:52Z\\\",\\\"message\\\":\\\"W0129 15:10:41.834292 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
15:10:41.834677 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769699441 cert, and key in /tmp/serving-cert-584024862/serving-signer.crt, /tmp/serving-cert-584024862/serving-signer.key\\\\nI0129 15:10:42.075370 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:10:42.076714 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:10:42.076833 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:42.080940 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-584024862/tls.crt::/tmp/serving-cert-584024862/tls.key\\\\\\\"\\\\nF0129 15:10:52.667593 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 
15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.353650 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.372644 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.386748 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.390057 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 03:34:24.247642633 +0000 UTC Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402302 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402347 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e6815a1b-56eb-4075-84ae-1af5d0dcb742-ovn-node-metrics-cert\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402372 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-cni-bin\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402403 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e6815a1b-56eb-4075-84ae-1af5d0dcb742-env-overrides\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402424 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-slash\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402445 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-run-ovn\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc 
kubenswrapper[4757]: I0129 15:11:01.402442 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402465 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-run-netns\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402503 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-run-netns\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402522 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-run-systemd\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402543 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-cni-netd\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402560 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-systemd-units\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402577 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-etc-openvswitch\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402618 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-run-ovn-kubernetes\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402634 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-run-openvswitch\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402648 4757 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e6815a1b-56eb-4075-84ae-1af5d0dcb742-ovnkube-script-lib\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402662 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zhhj\" (UniqueName: \"kubernetes.io/projected/e6815a1b-56eb-4075-84ae-1af5d0dcb742-kube-api-access-5zhhj\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402679 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-var-lib-openvswitch\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402694 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-kubelet\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402709 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-node-log\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402725 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e6815a1b-56eb-4075-84ae-1af5d0dcb742-ovnkube-config\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402742 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-log-socket\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402807 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-log-socket\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402838 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-run-systemd\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402846 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-cni-bin\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402907 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-etc-openvswitch\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402868 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-cni-netd\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402891 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-systemd-units\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402946 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-run-ovn-kubernetes\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.402968 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-run-openvswitch\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.403013 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-kubelet\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.403288 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-var-lib-openvswitch\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.403320 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-slash\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.403323 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-node-log\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.403359 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-run-ovn\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.403632 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e6815a1b-56eb-4075-84ae-1af5d0dcb742-env-overrides\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.403756 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e6815a1b-56eb-4075-84ae-1af5d0dcb742-ovnkube-script-lib\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.403830 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e6815a1b-56eb-4075-84ae-1af5d0dcb742-ovnkube-config\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.407839 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e6815a1b-56eb-4075-84ae-1af5d0dcb742-ovn-node-metrics-cert\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.420967 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zhhj\" (UniqueName: \"kubernetes.io/projected/e6815a1b-56eb-4075-84ae-1af5d0dcb742-kube-api-access-5zhhj\") pod \"ovnkube-node-8fwvd\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.488247 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:01 crc kubenswrapper[4757]: W0129 15:11:01.499737 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6815a1b_56eb_4075_84ae_1af5d0dcb742.slice/crio-a6d8043f83fa78c26bbeae3a6dd10dc81f4827963e795853d866a7b857c693e1 WatchSource:0}: Error finding container a6d8043f83fa78c26bbeae3a6dd10dc81f4827963e795853d866a7b857c693e1: Status 404 returned error can't find the container with id a6d8043f83fa78c26bbeae3a6dd10dc81f4827963e795853d866a7b857c693e1 Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.555658 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638"} Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.557386 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qxr9t" event={"ID":"3ac5eae5-5794-458e-b182-a3203b6638d1","Type":"ContainerStarted","Data":"564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e"} Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.559385 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" event={"ID":"e6815a1b-56eb-4075-84ae-1af5d0dcb742","Type":"ContainerStarted","Data":"a6d8043f83fa78c26bbeae3a6dd10dc81f4827963e795853d866a7b857c693e1"} Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.560857 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.563145 4757 scope.go:117] "RemoveContainer" containerID="1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08" Jan 29 15:11:01 crc kubenswrapper[4757]: E0129 15:11:01.563336 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.563882 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" event={"ID":"2ad19a70-dd88-4323-b98b-ae01159e0c64","Type":"ContainerStarted","Data":"61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05"} Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.563927 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" event={"ID":"2ad19a70-dd88-4323-b98b-ae01159e0c64","Type":"ContainerStarted","Data":"fccc39fb358a4fd1f48b4313ad62e81339dbfc9b56f915db1a1d0cae18b9bc43"} Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.565207 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerStarted","Data":"07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db"} Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.565241 4757 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerStarted","Data":"4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0"} Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.565258 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerStarted","Data":"ced7fd6d22408f76df060600dfc804021bafc72901f41f822fea7f7a896db3a0"} Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.571606 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.584000 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.596340 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.611337 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.629084 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.641332 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 
2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.654810 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.670296 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.683708 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.696882 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.716490 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.730283 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1eda1e717c10a3060d3fa87127fb2907bf5d5686f42038ba830e5106271c7977\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:52Z\\\",\\\"message\\\":\\\"W0129 15:10:41.834292 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
15:10:41.834677 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769699441 cert, and key in /tmp/serving-cert-584024862/serving-signer.crt, /tmp/serving-cert-584024862/serving-signer.key\\\\nI0129 15:10:42.075370 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:10:42.076714 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:10:42.076833 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:42.080940 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-584024862/tls.crt::/tmp/serving-cert-584024862/tls.key\\\\\\\"\\\\nF0129 15:10:52.667593 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 
15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.751212 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.794843 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.834404 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.872056 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\
"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.915125 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.950585 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:01 crc kubenswrapper[4757]: I0129 15:11:01.989108 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:01Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.010864 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:11:02 crc kubenswrapper[4757]: E0129 15:11:02.011054 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:11:06.011029057 +0000 UTC m=+29.300279294 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.029672 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc1
8fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.086433 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.111602 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:11:02 crc 
kubenswrapper[4757]: I0129 15:11:02.111654 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.111689 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.111720 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:02 crc kubenswrapper[4757]: E0129 15:11:02.111763 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:11:02 crc kubenswrapper[4757]: E0129 15:11:02.111785 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:11:02 crc kubenswrapper[4757]: E0129 15:11:02.111792 4757 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:11:02 crc kubenswrapper[4757]: E0129 15:11:02.111875 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:11:06.111858115 +0000 UTC m=+29.401108352 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:11:02 crc kubenswrapper[4757]: E0129 15:11:02.111805 4757 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:11:02 crc kubenswrapper[4757]: E0129 15:11:02.111959 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:11:06.111942787 +0000 UTC m=+29.401193084 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:11:02 crc kubenswrapper[4757]: E0129 15:11:02.111804 4757 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:11:02 crc kubenswrapper[4757]: E0129 15:11:02.111995 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:11:06.111988009 +0000 UTC m=+29.401238346 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:11:02 crc kubenswrapper[4757]: E0129 15:11:02.112216 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:11:02 crc kubenswrapper[4757]: E0129 15:11:02.112229 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:11:02 crc kubenswrapper[4757]: E0129 15:11:02.112240 4757 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:11:02 crc kubenswrapper[4757]: E0129 15:11:02.112297 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:11:06.112282937 +0000 UTC m=+29.401533174 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.114295 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.157402 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.202302 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.233143 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.269258 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.309991 4757 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:02Z is after 2025-08-24T17:21:41Z" Jan 29 
15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.350256 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.390493 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 00:56:47.725366608 +0000 UTC Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.395977 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:02 crc kubenswrapper[4757]: E0129 15:11:02.396087 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.396139 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.396191 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:11:02 crc kubenswrapper[4757]: E0129 15:11:02.396346 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:11:02 crc kubenswrapper[4757]: E0129 15:11:02.396398 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.572523 4757 generic.go:334] "Generic (PLEG): container finished" podID="2ad19a70-dd88-4323-b98b-ae01159e0c64" containerID="61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05" exitCode=0 Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.572623 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" event={"ID":"2ad19a70-dd88-4323-b98b-ae01159e0c64","Type":"ContainerDied","Data":"61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05"} Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.574777 4757 generic.go:334] "Generic (PLEG): container finished" podID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerID="30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c" exitCode=0 Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.575105 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" event={"ID":"e6815a1b-56eb-4075-84ae-1af5d0dcb742","Type":"ContainerDied","Data":"30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c"} Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.601557 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:02Z is 
after 2025-08-24T17:21:41Z" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.629911 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.644086 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.657835 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.672976 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.684687 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T15:11:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.694080 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.706628 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.729697 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.750336 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.790179 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.876319 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.889851 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.912559 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.950011 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:02 crc kubenswrapper[4757]: I0129 15:11:02.988847 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:02Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.035167 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.070098 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.107661 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.124210 4757 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.124798 4757 scope.go:117] "RemoveContainer" containerID="1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08" Jan 29 15:11:03 crc kubenswrapper[4757]: E0129 15:11:03.124963 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.153114 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.194020 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.229658 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.270639 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.318198 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:03Z 
is after 2025-08-24T17:21:41Z" Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.352691 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.391040 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 19:18:37.771540918 +0000 UTC Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.392154 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.434463 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.469938 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.581917 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" event={"ID":"e6815a1b-56eb-4075-84ae-1af5d0dcb742","Type":"ContainerStarted","Data":"e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407"} Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.581982 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" event={"ID":"e6815a1b-56eb-4075-84ae-1af5d0dcb742","Type":"ContainerStarted","Data":"4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd"} Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.581995 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" event={"ID":"e6815a1b-56eb-4075-84ae-1af5d0dcb742","Type":"ContainerStarted","Data":"253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa"} Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.584848 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" event={"ID":"2ad19a70-dd88-4323-b98b-ae01159e0c64","Type":"ContainerStarted","Data":"89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231"} Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.600330 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.615319 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.628120 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.649327 4757 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:03Z is after 2025-08-24T17:21:41Z" Jan 29 
15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.669376 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.715385 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.755991 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.797138 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:03Z 
is after 2025-08-24T17:21:41Z" Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.833193 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.872966 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.910185 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.950998 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:03 crc kubenswrapper[4757]: I0129 15:11:03.989617 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:03Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.030331 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.133671 4757 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.136303 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.136338 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.136347 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.136402 4757 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.144968 4757 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.145386 
4757 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.146612 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.146645 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.146655 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.146675 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.146686 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:04Z","lastTransitionTime":"2026-01-29T15:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:04 crc kubenswrapper[4757]: E0129 15:11:04.166881 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:04Z is after 
2025-08-24T17:21:41Z" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.171320 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.171356 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.171366 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.171381 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.171391 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:04Z","lastTransitionTime":"2026-01-29T15:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:04 crc kubenswrapper[4757]: E0129 15:11:04.185755 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:04Z is after 
2025-08-24T17:21:41Z" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.191480 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.191523 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.191534 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.191550 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.191560 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:04Z","lastTransitionTime":"2026-01-29T15:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:04 crc kubenswrapper[4757]: E0129 15:11:04.207014 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:04Z is after 
2025-08-24T17:21:41Z" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.210432 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.210469 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.210479 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.210497 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.210508 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:04Z","lastTransitionTime":"2026-01-29T15:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:04 crc kubenswrapper[4757]: E0129 15:11:04.222750 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:04Z is after 
2025-08-24T17:21:41Z" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.226700 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.226723 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.226732 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.226744 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.226753 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:04Z","lastTransitionTime":"2026-01-29T15:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:04 crc kubenswrapper[4757]: E0129 15:11:04.237201 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:04Z is after 
2025-08-24T17:21:41Z" Jan 29 15:11:04 crc kubenswrapper[4757]: E0129 15:11:04.237569 4757 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.238834 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.238861 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.238872 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.238887 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.238898 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:04Z","lastTransitionTime":"2026-01-29T15:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.341214 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.341249 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.341259 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.341289 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.341301 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:04Z","lastTransitionTime":"2026-01-29T15:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.391631 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 13:25:31.768671751 +0000 UTC Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.395975 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.396005 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.396066 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:04 crc kubenswrapper[4757]: E0129 15:11:04.396205 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:11:04 crc kubenswrapper[4757]: E0129 15:11:04.396343 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:11:04 crc kubenswrapper[4757]: E0129 15:11:04.396458 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.443565 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.443604 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.443616 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.443633 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.443644 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:04Z","lastTransitionTime":"2026-01-29T15:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.545616 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.545643 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.545660 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.545675 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.545684 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:04Z","lastTransitionTime":"2026-01-29T15:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.589765 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" event={"ID":"e6815a1b-56eb-4075-84ae-1af5d0dcb742","Type":"ContainerStarted","Data":"bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1"} Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.589807 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" event={"ID":"e6815a1b-56eb-4075-84ae-1af5d0dcb742","Type":"ContainerStarted","Data":"c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02"} Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.589820 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" event={"ID":"e6815a1b-56eb-4075-84ae-1af5d0dcb742","Type":"ContainerStarted","Data":"87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323"} Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.591168 4757 generic.go:334] "Generic (PLEG): container finished" podID="2ad19a70-dd88-4323-b98b-ae01159e0c64" containerID="89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231" exitCode=0 Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.591199 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" event={"ID":"2ad19a70-dd88-4323-b98b-ae01159e0c64","Type":"ContainerDied","Data":"89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231"} Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.608227 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.620245 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.636462 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin
\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.647603 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.647645 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.647655 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.647669 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.647678 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:04Z","lastTransitionTime":"2026-01-29T15:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.661347 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a
2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.673546 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.688685 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.703420 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.715699 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.726827 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.744881 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.755118 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.755150 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.755159 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.755172 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.755181 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:04Z","lastTransitionTime":"2026-01-29T15:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.757531 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.769871 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.783295 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.795162 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:04Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.858175 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.858215 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.858225 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.858241 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.858250 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:04Z","lastTransitionTime":"2026-01-29T15:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.959877 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.959930 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.959946 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.959964 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:04 crc kubenswrapper[4757]: I0129 15:11:04.959977 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:04Z","lastTransitionTime":"2026-01-29T15:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.062200 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.062237 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.062249 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.062279 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.062293 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:05Z","lastTransitionTime":"2026-01-29T15:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.164478 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.164510 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.164519 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.164532 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.164541 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:05Z","lastTransitionTime":"2026-01-29T15:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.267191 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.267257 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.267289 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.267318 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.267331 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:05Z","lastTransitionTime":"2026-01-29T15:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.369946 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.369981 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.369989 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.370002 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.370014 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:05Z","lastTransitionTime":"2026-01-29T15:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.392441 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 01:14:33.020676467 +0000 UTC Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.472855 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.472907 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.472918 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.472935 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.472945 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:05Z","lastTransitionTime":"2026-01-29T15:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
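Amid the NotReady churn, the certificate_manager.go:356 entry on this line is worth noting: the kubelet-serving certificate does not expire until 2026-02-24, but its rotation deadline of 2025-11-27 is already in the past at the logged clock time, so the kubelet will attempt rotation via a CSR on its next opportunity. client-go schedules that deadline at a jittered fraction of the certificate's lifetime; the sketch below illustrates the idea only — the 0.7–0.9 band is an assumption, not the exact upstream constant:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// Approximation of client-go's certificate manager rotation policy:
// schedule rotation at a jittered point late in the cert's validity window.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64() // assumed jitter band
	return notBefore.Add(time.Duration(float64(lifetime) * frac))
}

func main() {
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // assumed one-year cert
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log line above
	d := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", d)
	// If time.Now() is already past the deadline — as it is for the 2025-11-27
	// deadline logged above — the manager begins rotation on its next tick.
}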
Has your network provider started?"} Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.575900 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.575941 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.575952 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.575969 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.575980 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:05Z","lastTransitionTime":"2026-01-29T15:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.596583 4757 generic.go:334] "Generic (PLEG): container finished" podID="2ad19a70-dd88-4323-b98b-ae01159e0c64" containerID="0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad" exitCode=0 Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.596674 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" event={"ID":"2ad19a70-dd88-4323-b98b-ae01159e0c64","Type":"ContainerDied","Data":"0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad"} Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.617571 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.633602 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.652795 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\
":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary
-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.678320 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.678362 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.678374 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.678392 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.678402 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:05Z","lastTransitionTime":"2026-01-29T15:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.680662 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log
-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.714290 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.735065 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.774472 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.791248 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.791310 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.791324 4757 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.791341 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.791351 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:05Z","lastTransitionTime":"2026-01-29T15:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.794902 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.808010 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.821251 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.836806 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.861372 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.873090 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.884498 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.894252 4757 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.894305 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.894316 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.894331 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.894340 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:05Z","lastTransitionTime":"2026-01-29T15:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.996745 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.996784 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.996792 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.996807 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:05 crc kubenswrapper[4757]: I0129 15:11:05.996816 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:05Z","lastTransitionTime":"2026-01-29T15:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.059851 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:11:06 crc kubenswrapper[4757]: E0129 15:11:06.060096 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:11:14.060062657 +0000 UTC m=+37.349312904 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.098359 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.098415 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.098429 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.098446 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.098461 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:06Z","lastTransitionTime":"2026-01-29T15:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.161193 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.161251 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.161314 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:11:06 crc kubenswrapper[4757]: E0129 15:11:06.161333 4757 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:11:06 crc kubenswrapper[4757]: E0129 15:11:06.161379 4757 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:11:06 crc kubenswrapper[4757]: E0129 15:11:06.161391 4757 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:11:14.161376299 +0000 UTC m=+37.450626536 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.161341 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:11:06 crc kubenswrapper[4757]: E0129 15:11:06.161416 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:11:06 crc kubenswrapper[4757]: E0129 15:11:06.161527 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:11:06 crc kubenswrapper[4757]: E0129 15:11:06.161545 4757 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:11:06 crc kubenswrapper[4757]: E0129 15:11:06.161423 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:11:14.16141196 +0000 UTC m=+37.450662197 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:11:06 crc kubenswrapper[4757]: E0129 15:11:06.161581 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:11:06 crc kubenswrapper[4757]: E0129 15:11:06.161667 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:11:06 crc kubenswrapper[4757]: E0129 15:11:06.161606 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:11:14.161592255 +0000 UTC m=+37.450842492 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:11:06 crc kubenswrapper[4757]: E0129 15:11:06.161693 4757 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:11:06 crc kubenswrapper[4757]: E0129 15:11:06.161791 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:11:14.1617565 +0000 UTC m=+37.451006917 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.201408 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.201466 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.201481 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.201500 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.201513 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:06Z","lastTransitionTime":"2026-01-29T15:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.304111 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.304145 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.304153 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.304165 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.304174 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:06Z","lastTransitionTime":"2026-01-29T15:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.393211 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 00:35:27.281239644 +0000 UTC Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.395561 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.395560 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:11:06 crc kubenswrapper[4757]: E0129 15:11:06.395802 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:11:06 crc kubenswrapper[4757]: E0129 15:11:06.395692 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.395580 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:11:06 crc kubenswrapper[4757]: E0129 15:11:06.395899 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.406072 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.406108 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.406116 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.406130 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.406139 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:06Z","lastTransitionTime":"2026-01-29T15:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.508872 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.508916 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.508928 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.508945 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.508957 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:06Z","lastTransitionTime":"2026-01-29T15:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.610726 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.610781 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.610791 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.610812 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.610825 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:06Z","lastTransitionTime":"2026-01-29T15:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.611948 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" event={"ID":"2ad19a70-dd88-4323-b98b-ae01159e0c64","Type":"ContainerStarted","Data":"b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e"} Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.628938 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.654015 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:06Z 
is after 2025-08-24T17:21:41Z" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.667460 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.683872 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.696651 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.707763 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.713170 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.713211 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.713223 4757 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.713241 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.713251 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:06Z","lastTransitionTime":"2026-01-29T15:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.718372 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.727618 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.738107 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:06Z is after 2025-08-24T17:21:41Z"
Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.747925 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:06Z is after 2025-08-24T17:21:41Z"
Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.758417 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:06Z is after 2025-08-24T17:21:41Z"
Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.769623 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:06Z is after 2025-08-24T17:21:41Z"
Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.783195 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:06Z is after 2025-08-24T17:21:41Z"
Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.796260 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:06Z is after 2025-08-24T17:21:41Z"
Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.815358 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.815407 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.815419 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.815438 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.815446 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:06Z","lastTransitionTime":"2026-01-29T15:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.917460 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.917521 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.917536 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.917564 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:06 crc kubenswrapper[4757]: I0129 15:11:06.917595 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:06Z","lastTransitionTime":"2026-01-29T15:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.019925 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.020156 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.020293 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.020384 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.020506 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:07Z","lastTransitionTime":"2026-01-29T15:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.123189 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.123245 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.123256 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.123287 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.123299 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:07Z","lastTransitionTime":"2026-01-29T15:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.195691 4757 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.225826 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.225868 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.225883 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.225903 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.225915 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:07Z","lastTransitionTime":"2026-01-29T15:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.329956 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.330402 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.330473 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.330546 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.330630 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:07Z","lastTransitionTime":"2026-01-29T15:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.393393 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 10:12:25.194551676 +0000 UTC
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.409608 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.422677 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.432606 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.432642 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.432653 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.432669 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.432681 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:07Z","lastTransitionTime":"2026-01-29T15:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.435934 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.448806 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.464379 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.482904 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.494371 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.506804 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.523057 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.533991 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.534717 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.534844 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.534934 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.535031 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.535111 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:07Z","lastTransitionTime":"2026-01-29T15:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.549337 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.561665 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.570457 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.578846 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.618047 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" event={"ID":"e6815a1b-56eb-4075-84ae-1af5d0dcb742","Type":"ContainerStarted","Data":"845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9"} Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.621322 4757 generic.go:334] "Generic (PLEG): container finished" podID="2ad19a70-dd88-4323-b98b-ae01159e0c64" containerID="b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e" exitCode=0 Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.621396 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" event={"ID":"2ad19a70-dd88-4323-b98b-ae01159e0c64","Type":"ContainerDied","Data":"b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e"} Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.633413 4757 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.638581 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.638617 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.638628 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.638645 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.638656 4757 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:07Z","lastTransitionTime":"2026-01-29T15:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.648368 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.662714 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.677073 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.691983 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.705975 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.718916 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.729684 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.741780 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.741818 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.741833 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.741852 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.741864 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:07Z","lastTransitionTime":"2026-01-29T15:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.741949 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.756674 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.769972 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.788223 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.810589 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.826096 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:07Z is after 2025-08-24T17:21:41Z"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.844801 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.844836 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.844846 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.844860 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.844871 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:07Z","lastTransitionTime":"2026-01-29T15:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.948305 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.948344 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.948352 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.948365 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:07 crc kubenswrapper[4757]: I0129 15:11:07.948375 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:07Z","lastTransitionTime":"2026-01-29T15:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.051769 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.051826 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.051838 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.051857 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.051869 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:08Z","lastTransitionTime":"2026-01-29T15:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.154365 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.154413 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.154428 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.154447 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.154459 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:08Z","lastTransitionTime":"2026-01-29T15:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.257215 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.257282 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.257297 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.257319 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.257331 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:08Z","lastTransitionTime":"2026-01-29T15:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.360152 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.360206 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.360218 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.360237 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.360250 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:08Z","lastTransitionTime":"2026-01-29T15:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.393602 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 23:55:30.136062262 +0000 UTC
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.396226 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.396284 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.396301 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:11:08 crc kubenswrapper[4757]: E0129 15:11:08.396420 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:11:08 crc kubenswrapper[4757]: E0129 15:11:08.396509 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:11:08 crc kubenswrapper[4757]: E0129 15:11:08.396618 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.462352 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.462382 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.462391 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.462703 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.462745 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:08Z","lastTransitionTime":"2026-01-29T15:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.564978 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.565011 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.565019 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.565033 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.565044 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:08Z","lastTransitionTime":"2026-01-29T15:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.634502 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" event={"ID":"2ad19a70-dd88-4323-b98b-ae01159e0c64","Type":"ContainerStarted","Data":"afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de"} Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.650684 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.664299 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.667187 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.667218 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.667248 4757 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.667277 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.667290 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:08Z","lastTransitionTime":"2026-01-29T15:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.675289 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.687090 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.699580 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.714170 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.728197 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.744018 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.769951 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.770210 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.770298 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.770368 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.770435 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:08Z","lastTransitionTime":"2026-01-29T15:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.774724 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.789148 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.802317 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.817505 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.834588 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.848443 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.873808 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.873866 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.873881 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.873902 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.873916 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:08Z","lastTransitionTime":"2026-01-29T15:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.976758 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.976803 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.976814 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.976835 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:08 crc kubenswrapper[4757]: I0129 15:11:08.976847 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:08Z","lastTransitionTime":"2026-01-29T15:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.079720 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.079766 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.079776 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.079792 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.079803 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:09Z","lastTransitionTime":"2026-01-29T15:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.182004 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.182044 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.182054 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.182069 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.182079 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:09Z","lastTransitionTime":"2026-01-29T15:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.284777 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.284821 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.284830 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.284843 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.284854 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:09Z","lastTransitionTime":"2026-01-29T15:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.386838 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.386888 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.386899 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.386930 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.386943 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:09Z","lastTransitionTime":"2026-01-29T15:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.394232 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 17:16:26.402861349 +0000 UTC Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.489828 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.489872 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.489882 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.489901 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.489912 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:09Z","lastTransitionTime":"2026-01-29T15:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.592337 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.592376 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.592389 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.592406 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.592416 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:09Z","lastTransitionTime":"2026-01-29T15:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.646857 4757 generic.go:334] "Generic (PLEG): container finished" podID="2ad19a70-dd88-4323-b98b-ae01159e0c64" containerID="afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de" exitCode=0 Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.646938 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" event={"ID":"2ad19a70-dd88-4323-b98b-ae01159e0c64","Type":"ContainerDied","Data":"afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de"} Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.659551 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" event={"ID":"e6815a1b-56eb-4075-84ae-1af5d0dcb742","Type":"ContainerStarted","Data":"531bb7cc353cb099ea32705b2ebe2f3c4103ffe69df9226f1888964d4a3e75a8"} Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.659923 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.659944 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.660081 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.664235 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.682714 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.695572 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.696026 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.696400 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.696439 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.696448 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.696466 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.696476 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:09Z","lastTransitionTime":"2026-01-29T15:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.699446 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.705079 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.715495 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.726866 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.741716 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.753219 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.765513 4757 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:09Z is after 2025-08-24T17:21:41Z" Jan 29 
15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.778891 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.793859 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.800659 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.800708 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.800724 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.800741 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.800754 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:09Z","lastTransitionTime":"2026-01-29T15:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.809338 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.837895 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:09Z 
is after 2025-08-24T17:21:41Z" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.853550 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.868968 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.884847 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.903000 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.903410 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.903461 4757 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.903478 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.903501 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.903521 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:09Z","lastTransitionTime":"2026-01-29T15:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.917469 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.932119 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.946167 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.962776 4757 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.979757 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:09 crc kubenswrapper[4757]: I0129 15:11:09.995892 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.005162 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.005198 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.005207 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.005221 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.005231 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:10Z","lastTransitionTime":"2026-01-29T15:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.008916 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.030308 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531bb7cc353cb099ea32705b2ebe2f3c4103ffe69df9226f1888964d4a3e75a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.046163 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.068396 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.083774 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.107553 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:10 
crc kubenswrapper[4757]: I0129 15:11:10.107584 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.107592 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.107607 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.107615 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:10Z","lastTransitionTime":"2026-01-29T15:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.210342 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.210377 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.210387 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.210404 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.210415 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:10Z","lastTransitionTime":"2026-01-29T15:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.312660 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.312696 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.312706 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.312720 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.312731 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:10Z","lastTransitionTime":"2026-01-29T15:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.394366 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 01:00:10.551483288 +0000 UTC Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.395692 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:11:10 crc kubenswrapper[4757]: E0129 15:11:10.395821 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.396230 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:11:10 crc kubenswrapper[4757]: E0129 15:11:10.396329 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.396392 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:10 crc kubenswrapper[4757]: E0129 15:11:10.396454 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.415501 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.415556 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.415570 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.415589 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.415600 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:10Z","lastTransitionTime":"2026-01-29T15:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.518058 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.518099 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.518109 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.518124 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.518133 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:10Z","lastTransitionTime":"2026-01-29T15:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.620813 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.621277 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.621289 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.621302 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.621311 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:10Z","lastTransitionTime":"2026-01-29T15:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.665525 4757 generic.go:334] "Generic (PLEG): container finished" podID="2ad19a70-dd88-4323-b98b-ae01159e0c64" containerID="3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91" exitCode=0 Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.666467 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" event={"ID":"2ad19a70-dd88-4323-b98b-ae01159e0c64","Type":"ContainerDied","Data":"3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91"} Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.693587 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.708490 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.722668 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.723936 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.723962 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.723983 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.724001 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.724014 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:10Z","lastTransitionTime":"2026-01-29T15:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.736213 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.750074 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.765334 4757 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.779625 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.796463 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc125
8e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.828151 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.828197 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.828207 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.828227 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.828238 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:10Z","lastTransitionTime":"2026-01-29T15:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.863320 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531bb7cc353cb099ea32705b2ebe2f3c4103ffe6
9df9226f1888964d4a3e75a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.891147 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.914447 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.929920 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.929970 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.929983 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.930001 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.930014 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:10Z","lastTransitionTime":"2026-01-29T15:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.932862 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.946446 4757 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:10 crc kubenswrapper[4757]: I0129 15:11:10.959150 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.035806 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.035855 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.035867 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.035884 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.035900 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:11Z","lastTransitionTime":"2026-01-29T15:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.138433 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.138485 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.138493 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.138510 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.138522 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:11Z","lastTransitionTime":"2026-01-29T15:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.241400 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.241440 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.241451 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.241467 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.241482 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:11Z","lastTransitionTime":"2026-01-29T15:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.344493 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.344535 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.344545 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.344568 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.344577 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:11Z","lastTransitionTime":"2026-01-29T15:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.394504 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 03:32:16.042304928 +0000 UTC
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.446404 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.446440 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.446457 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.446472 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.446483 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:11Z","lastTransitionTime":"2026-01-29T15:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.549705 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.550458 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.550500 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.550525 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.550539 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:11Z","lastTransitionTime":"2026-01-29T15:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.653704 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.653753 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.653764 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.653780 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.653794 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:11Z","lastTransitionTime":"2026-01-29T15:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.672434 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" event={"ID":"2ad19a70-dd88-4323-b98b-ae01159e0c64","Type":"ContainerStarted","Data":"51b3162370fef0e0542b623cc6779b6642109211555df78e82440a74fedf376b"} Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.694363 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.709318 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.721665 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.733957 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.747473 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:11Z is after 2025-08-24T17:21:41Z"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.756471 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.756535 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.756545 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.756565 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.756578 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:11Z","lastTransitionTime":"2026-01-29T15:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.762551 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.773073 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.783438 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.812915 4757 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.824599 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.840493 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.854898 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b3162370fef0e0542b623cc6779b6642109211555df78e82440a74fedf376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:11:11Z is after 2025-08-24T17:21:41Z"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.858653 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.858689 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.858698 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.858715 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.858725 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:11Z","lastTransitionTime":"2026-01-29T15:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.872892 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531bb7cc353cb099ea32705b2ebe2f3c4103ffe6
9df9226f1888964d4a3e75a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.887070 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.965434 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.965472 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.965481 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.965495 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:11 crc kubenswrapper[4757]: I0129 15:11:11.965506 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:11Z","lastTransitionTime":"2026-01-29T15:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.067501 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.067539 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.067548 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.067563 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.067574 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:12Z","lastTransitionTime":"2026-01-29T15:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.169733 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.169780 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.169789 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.169803 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.169814 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:12Z","lastTransitionTime":"2026-01-29T15:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.272867 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.272904 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.272912 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.272927 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.272938 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:12Z","lastTransitionTime":"2026-01-29T15:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.374818 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.374851 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.374861 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.374877 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.374890 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:12Z","lastTransitionTime":"2026-01-29T15:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.395807 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 18:26:21.404103391 +0000 UTC Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.395969 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.395971 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:11:12 crc kubenswrapper[4757]: E0129 15:11:12.396100 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.396118 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:11:12 crc kubenswrapper[4757]: E0129 15:11:12.396230 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:11:12 crc kubenswrapper[4757]: E0129 15:11:12.396318 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.477747 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.477784 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.477795 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.477810 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.477822 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:12Z","lastTransitionTime":"2026-01-29T15:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.575456 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7"] Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.575943 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" Jan 29 15:11:12 crc kubenswrapper[4757]: W0129 15:11:12.579878 4757 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd": failed to list *v1.Secret: secrets "ovn-kubernetes-control-plane-dockercfg-gs7dd" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 29 15:11:12 crc kubenswrapper[4757]: E0129 15:11:12.579927 4757 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-gs7dd\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-control-plane-dockercfg-gs7dd\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.580077 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.580864 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.580901 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.580914 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.580930 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.580943 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:12Z","lastTransitionTime":"2026-01-29T15:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.596715 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.610605 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.624536 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b3162370fef0e0542b623cc6779b6642109211555df78e82440a74fedf376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:11:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.635453 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9qk4\" (UniqueName: \"kubernetes.io/projected/18611b4b-3eb0-4d3c-a9b1-1899616e8ac3-kube-api-access-k9qk4\") pod \"ovnkube-control-plane-749d76644c-6v5r7\" (UID: \"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.635504 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/18611b4b-3eb0-4d3c-a9b1-1899616e8ac3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-6v5r7\" (UID: \"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.635530 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/18611b4b-3eb0-4d3c-a9b1-1899616e8ac3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-6v5r7\" (UID: \"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.635547 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/18611b4b-3eb0-4d3c-a9b1-1899616e8ac3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-6v5r7\" (UID: \"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.642066 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531bb7cc353cb099ea32705b2ebe2f3c4103ffe6
9df9226f1888964d4a3e75a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.655792 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.667039 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.682469 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.682504 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.682512 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.682538 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.682558 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:12Z","lastTransitionTime":"2026-01-29T15:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.684130 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.696942 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.708437 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.719845 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6v5r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.730875 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.736772 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/18611b4b-3eb0-4d3c-a9b1-1899616e8ac3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-6v5r7\" (UID: \"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.736844 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/18611b4b-3eb0-4d3c-a9b1-1899616e8ac3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-6v5r7\" (UID: \"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.736929 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9qk4\" (UniqueName: \"kubernetes.io/projected/18611b4b-3eb0-4d3c-a9b1-1899616e8ac3-kube-api-access-k9qk4\") pod \"ovnkube-control-plane-749d76644c-6v5r7\" (UID: \"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.736984 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/18611b4b-3eb0-4d3c-a9b1-1899616e8ac3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-6v5r7\" (UID: \"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.744868 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.762437 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.777460 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.784812 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.784859 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.784868 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.784882 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.784891 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:12Z","lastTransitionTime":"2026-01-29T15:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.790351 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea1
77225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.880259 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/18611b4b-3eb0-4d3c-a9b1-1899616e8ac3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-6v5r7\" (UID: \"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.880344 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/18611b4b-3eb0-4d3c-a9b1-1899616e8ac3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-6v5r7\" (UID: \"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.880583 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/18611b4b-3eb0-4d3c-a9b1-1899616e8ac3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-6v5r7\" (UID: \"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.882948 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9qk4\" (UniqueName: \"kubernetes.io/projected/18611b4b-3eb0-4d3c-a9b1-1899616e8ac3-kube-api-access-k9qk4\") pod \"ovnkube-control-plane-749d76644c-6v5r7\" (UID: \"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.886676 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.886717 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.886729 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.886746 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 
15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.886757 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:12Z","lastTransitionTime":"2026-01-29T15:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.989322 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.989378 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.989393 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.989414 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:12 crc kubenswrapper[4757]: I0129 15:11:12.989469 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:12Z","lastTransitionTime":"2026-01-29T15:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.091190 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.091483 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.091560 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.091621 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.091694 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:13Z","lastTransitionTime":"2026-01-29T15:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.193969 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.194259 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.194375 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.194460 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.194536 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:13Z","lastTransitionTime":"2026-01-29T15:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.297560 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.297589 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.297599 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.297613 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.297624 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:13Z","lastTransitionTime":"2026-01-29T15:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.396659 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 02:58:06.15930989 +0000 UTC
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.400006 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.400050 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.400094 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.400114 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.400127 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:13Z","lastTransitionTime":"2026-01-29T15:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.502434 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.502468 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.502479 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.502491 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.502499 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:13Z","lastTransitionTime":"2026-01-29T15:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.580809 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.581815 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7"
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.609536 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.609573 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.609583 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.609598 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.609608 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:13Z","lastTransitionTime":"2026-01-29T15:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.679081 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" event={"ID":"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3","Type":"ContainerStarted","Data":"f941fefda5fc09e606d87b15bfde71c9fcf47ece5294ee764b49619c1e3a24ab"}
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.681827 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-drtf8"]
Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.682245 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8"
Jan 29 15:11:13 crc kubenswrapper[4757]: E0129 15:11:13.682326 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.696443 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.708664 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.711363 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.711394 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.711404 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.711419 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.711429 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:13Z","lastTransitionTime":"2026-01-29T15:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.721810 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b3162370fef0e0542b623cc6779b6642109211555df78e82440a74fedf376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:
11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.739594 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531bb7cc353cb099ea32705b2ebe2f3c4103ffe6
9df9226f1888964d4a3e75a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.747093 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bcff\" (UniqueName: \"kubernetes.io/projected/8c722d3b-1755-4633-967e-35591890a231-kube-api-access-7bcff\") pod \"network-metrics-daemon-drtf8\" (UID: \"8c722d3b-1755-4633-967e-35591890a231\") " pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.747145 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c722d3b-1755-4633-967e-35591890a231-metrics-certs\") pod \"network-metrics-daemon-drtf8\" (UID: \"8c722d3b-1755-4633-967e-35591890a231\") " pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.752462 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.762872 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drtf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c722d3b-1755-4633-967e-35591890a231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drtf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.774527 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.786080 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.795872 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.805154 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.813127 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.813162 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.813170 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.813184 4757 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.813193 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:13Z","lastTransitionTime":"2026-01-29T15:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.817772 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.834932 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.847157 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.847707 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bcff\" (UniqueName: \"kubernetes.io/projected/8c722d3b-1755-4633-967e-35591890a231-kube-api-access-7bcff\") pod \"network-metrics-daemon-drtf8\" (UID: \"8c722d3b-1755-4633-967e-35591890a231\") " pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.847763 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c722d3b-1755-4633-967e-35591890a231-metrics-certs\") pod \"network-metrics-daemon-drtf8\" (UID: \"8c722d3b-1755-4633-967e-35591890a231\") " pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:11:13 crc kubenswrapper[4757]: E0129 15:11:13.847887 4757 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:11:13 crc kubenswrapper[4757]: E0129 15:11:13.847937 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c722d3b-1755-4633-967e-35591890a231-metrics-certs 
podName:8c722d3b-1755-4633-967e-35591890a231 nodeName:}" failed. No retries permitted until 2026-01-29 15:11:14.347922179 +0000 UTC m=+37.637172436 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8c722d3b-1755-4633-967e-35591890a231-metrics-certs") pod "network-metrics-daemon-drtf8" (UID: "8c722d3b-1755-4633-967e-35591890a231") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.860759 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.865849 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bcff\" (UniqueName: \"kubernetes.io/projected/8c722d3b-1755-4633-967e-35591890a231-kube-api-access-7bcff\") pod \"network-metrics-daemon-drtf8\" (UID: \"8c722d3b-1755-4633-967e-35591890a231\") " pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.872243 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.885629 4757 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6v5r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:13Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.915211 4757 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.915245 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.915255 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.915290 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:13 crc kubenswrapper[4757]: I0129 15:11:13.915302 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:13Z","lastTransitionTime":"2026-01-29T15:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.017624 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.017659 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.017673 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.017688 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.017701 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:14Z","lastTransitionTime":"2026-01-29T15:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.121073 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.121407 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.121418 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.121432 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.121441 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:14Z","lastTransitionTime":"2026-01-29T15:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
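Interleaved with the webhook failures, the kubelet keeps re-recording node conditions and setting Ready=False with reason KubeletNotReady, because the container runtime reports NetworkReady=false: no CNI configuration file exists in /etc/kubernetes/cni/net.d/. That probe can be approximated with a short dependency-free sketch; the directory comes from the log message, while the accepted file extensions are an assumption based on common CNI configuration loaders.

// cni_check.go: approximate the runtime's NetworkReady probe by looking for
// CNI configuration files in the directory named in the log message. The
// directory comes from the log; the extensions are an assumption.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("read dir:", err)
		return
	}
	var confs []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		fmt.Println("no CNI configuration file found; the node would stay NotReady")
		return
	}
	fmt.Println("found CNI configs:", confs)
}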
Has your network provider started?"} Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.151778 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:11:14 crc kubenswrapper[4757]: E0129 15:11:14.151964 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:11:30.151936717 +0000 UTC m=+53.441186954 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.224249 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.224290 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.224300 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.224314 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.224323 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:14Z","lastTransitionTime":"2026-01-29T15:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.253058 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.253122 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.253149 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.253185 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:11:14 crc kubenswrapper[4757]: E0129 15:11:14.253337 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:11:14 crc kubenswrapper[4757]: E0129 15:11:14.253369 4757 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:11:14 crc kubenswrapper[4757]: E0129 15:11:14.253387 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:11:14 crc kubenswrapper[4757]: E0129 15:11:14.253400 4757 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:11:14 crc kubenswrapper[4757]: E0129 15:11:14.253409 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:11:14 crc kubenswrapper[4757]: E0129 15:11:14.253437 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:11:14 crc kubenswrapper[4757]: E0129 15:11:14.253453 4757 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:11:14 crc kubenswrapper[4757]: E0129 15:11:14.253457 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:11:30.253423555 +0000 UTC m=+53.542673792 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:11:14 crc kubenswrapper[4757]: E0129 15:11:14.253476 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:11:30.253470646 +0000 UTC m=+53.542720873 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:11:14 crc kubenswrapper[4757]: E0129 15:11:14.253519 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:11:30.253498677 +0000 UTC m=+53.542749094 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:11:14 crc kubenswrapper[4757]: E0129 15:11:14.253555 4757 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:11:14 crc kubenswrapper[4757]: E0129 15:11:14.253602 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:11:30.253577129 +0000 UTC m=+53.542827366 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.326838 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.326867 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.326876 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.326889 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.326899 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:14Z","lastTransitionTime":"2026-01-29T15:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.345639 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.345685 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.345697 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.345714 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.345726 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:14Z","lastTransitionTime":"2026-01-29T15:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
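The MountVolume.SetUp and UnmountVolume failures in this section are not retried immediately: the nestedpendingoperations entries schedule each retry with a growing durationBeforeRetry (500ms, then 1s, then 16s in the lines above and below), consistent with a doubling exponential backoff per volume operation. The sketch below reproduces such a schedule; the 500ms starting value matches the log, while the cap of just over two minutes is an assumption rather than something stated in these lines.

// backoff.go: reproduce the retry delays seen in the durationBeforeRetry
// fields (500ms, 1s, ..., 16s) with a doubling backoff. The 500ms start
// matches the log; the cap is an assumption.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond
	maxDelay := 2*time.Minute + 2*time.Second // assumed cap
	for i := 1; i <= 10; i++ {
		fmt.Printf("retry %d: durationBeforeRetry %s\n", i, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

Note that each failed SetUp here has the same underlying cause: the referenced secret or configmap (metrics-daemon-secret, nginx-conf, the kube-root-ca.crt and openshift-service-ca.crt bundles) is "not registered" with the kubelet yet, so the backoff only delays an operation that cannot succeed until those objects are synced.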
Has your network provider started?"} Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.353591 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c722d3b-1755-4633-967e-35591890a231-metrics-certs\") pod \"network-metrics-daemon-drtf8\" (UID: \"8c722d3b-1755-4633-967e-35591890a231\") " pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:11:14 crc kubenswrapper[4757]: E0129 15:11:14.353718 4757 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:11:14 crc kubenswrapper[4757]: E0129 15:11:14.353765 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c722d3b-1755-4633-967e-35591890a231-metrics-certs podName:8c722d3b-1755-4633-967e-35591890a231 nodeName:}" failed. No retries permitted until 2026-01-29 15:11:15.353752398 +0000 UTC m=+38.643002625 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8c722d3b-1755-4633-967e-35591890a231-metrics-certs") pod "network-metrics-daemon-drtf8" (UID: "8c722d3b-1755-4633-967e-35591890a231") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:11:14 crc kubenswrapper[4757]: E0129 15:11:14.359148 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.364621 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.364704 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.364722 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.364739 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.364766 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:14Z","lastTransitionTime":"2026-01-29T15:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:14 crc kubenswrapper[4757]: E0129 15:11:14.376541 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.386340 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.386380 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.386395 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.386413 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.386423 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:14Z","lastTransitionTime":"2026-01-29T15:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.395615 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.395709 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.395614 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:11:14 crc kubenswrapper[4757]: E0129 15:11:14.395744 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:11:14 crc kubenswrapper[4757]: E0129 15:11:14.395891 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:11:14 crc kubenswrapper[4757]: E0129 15:11:14.396025 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.397406 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 19:45:47.936807361 +0000 UTC Jan 29 15:11:14 crc kubenswrapper[4757]: E0129 15:11:14.397816 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.402026 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.402057 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.402067 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.402086 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.402097 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:14Z","lastTransitionTime":"2026-01-29T15:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:14 crc kubenswrapper[4757]: E0129 15:11:14.414641 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.421735 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.421773 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.421787 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.421804 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.421817 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:14Z","lastTransitionTime":"2026-01-29T15:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:14 crc kubenswrapper[4757]: E0129 15:11:14.435675 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:14 crc kubenswrapper[4757]: E0129 15:11:14.435851 4757 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.437604 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.437655 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.437668 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.437689 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.437702 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:14Z","lastTransitionTime":"2026-01-29T15:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.540422 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.540462 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.540472 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.540486 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.540495 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:14Z","lastTransitionTime":"2026-01-29T15:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.643402 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.643442 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.643454 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.643470 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.643482 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:14Z","lastTransitionTime":"2026-01-29T15:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.682802 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8fwvd_e6815a1b-56eb-4075-84ae-1af5d0dcb742/ovnkube-controller/0.log" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.684883 4757 generic.go:334] "Generic (PLEG): container finished" podID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerID="531bb7cc353cb099ea32705b2ebe2f3c4103ffe69df9226f1888964d4a3e75a8" exitCode=1 Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.684933 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" event={"ID":"e6815a1b-56eb-4075-84ae-1af5d0dcb742","Type":"ContainerDied","Data":"531bb7cc353cb099ea32705b2ebe2f3c4103ffe69df9226f1888964d4a3e75a8"} Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.685781 4757 scope.go:117] "RemoveContainer" containerID="531bb7cc353cb099ea32705b2ebe2f3c4103ffe69df9226f1888964d4a3e75a8" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.686807 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" event={"ID":"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3","Type":"ContainerStarted","Data":"6e189245d7b997f49cf802b2b19920c9ea002c2fed94b5020b8dd5c8955e5007"} Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.686833 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" event={"ID":"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3","Type":"ContainerStarted","Data":"f8edba3ca332ecbe447cdb9d9fc5ab2f3c07cf42253ebffa7fbc669e21b9789f"} Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.700935 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.715457 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.729757 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b3162370fef0e0542b623cc6779b6642109211555df78e82440a74fedf376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.745508 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.745555 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.745567 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.745582 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.745593 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:14Z","lastTransitionTime":"2026-01-29T15:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.750132 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531bb7cc353cb099ea32705b2ebe2f3c4103ffe6
9df9226f1888964d4a3e75a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531bb7cc353cb099ea32705b2ebe2f3c4103ffe69df9226f1888964d4a3e75a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"message\\\":\\\"crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 15:11:14.286124 5952 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:11:14.286198 5952 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 15:11:14.286604 5952 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:11:14.286620 5952 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:11:14.286634 5952 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 15:11:14.286687 5952 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 15:11:14.286699 5952 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 15:11:14.286714 5952 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:11:14.286724 5952 factory.go:656] Stopping watch factory\\\\nI0129 15:11:14.286731 5952 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:11:14.286739 5952 ovnkube.go:599] Stopped ovnkube\\\\nI0129 15:11:14.286769 5952 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:11:14.286786 5952 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 
15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.766060 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.778622 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drtf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c722d3b-1755-4633-967e-35591890a231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drtf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.795248 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.810221 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.820208 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.830350 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.843875 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.847738 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.847792 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.847803 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.847828 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.847858 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:14Z","lastTransitionTime":"2026-01-29T15:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.862155 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.875935 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.889774 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.903530 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.916877 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6v5r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.930934 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a57
8bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.950846 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.950905 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.950916 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.950933 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.950945 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:14Z","lastTransitionTime":"2026-01-29T15:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.952875 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.963404 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.975506 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:14 crc kubenswrapper[4757]: I0129 15:11:14.988235 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:14Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.003895 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.019256 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.032414 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.047297 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.053241 4757 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.053285 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.053298 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.053311 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.053320 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:15Z","lastTransitionTime":"2026-01-29T15:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.061662 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8edba3ca332ecbe447cdb9d9fc5ab2f3c07cf42253ebffa7fbc669e21b9789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e189245d7b997f49cf802b2b19920c9ea002c2fed94b5020b8dd5c8955e5007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6v5r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.076619 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.090578 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.141541 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b3162370fef0e0542b623cc6779b6642109211555df78e82440a74fedf376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:11:15Z is after 2025-08-24T17:21:41Z"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.155777 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.155820 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.155829 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.155844 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.155854 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:15Z","lastTransitionTime":"2026-01-29T15:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.169463 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://531bb7cc353cb099ea32705b2ebe2f3c4103ffe6
9df9226f1888964d4a3e75a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531bb7cc353cb099ea32705b2ebe2f3c4103ffe69df9226f1888964d4a3e75a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"message\\\":\\\"crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 15:11:14.286124 5952 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:11:14.286198 5952 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 15:11:14.286604 5952 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:11:14.286620 5952 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:11:14.286634 5952 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 15:11:14.286687 5952 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 15:11:14.286699 5952 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 15:11:14.286714 5952 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:11:14.286724 5952 factory.go:656] Stopping watch factory\\\\nI0129 15:11:14.286731 5952 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:11:14.286739 5952 ovnkube.go:599] Stopped ovnkube\\\\nI0129 15:11:14.286769 5952 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:11:14.286786 5952 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 
15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.183719 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.197884 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drtf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c722d3b-1755-4633-967e-35591890a231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drtf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:15Z is after 2025-08-24T17:21:41Z"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.258072 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.258132 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.258142 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.258158 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.258167 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:15Z","lastTransitionTime":"2026-01-29T15:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.360880 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.360921 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.360935 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.360951 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.360962 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:15Z","lastTransitionTime":"2026-01-29T15:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.364422 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c722d3b-1755-4633-967e-35591890a231-metrics-certs\") pod \"network-metrics-daemon-drtf8\" (UID: \"8c722d3b-1755-4633-967e-35591890a231\") " pod="openshift-multus/network-metrics-daemon-drtf8"
Jan 29 15:11:15 crc kubenswrapper[4757]: E0129 15:11:15.364561 4757 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 29 15:11:15 crc kubenswrapper[4757]: E0129 15:11:15.364624 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c722d3b-1755-4633-967e-35591890a231-metrics-certs podName:8c722d3b-1755-4633-967e-35591890a231 nodeName:}" failed. No retries permitted until 2026-01-29 15:11:17.364606783 +0000 UTC m=+40.653857020 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8c722d3b-1755-4633-967e-35591890a231-metrics-certs") pod "network-metrics-daemon-drtf8" (UID: "8c722d3b-1755-4633-967e-35591890a231") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.395609 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8"
Jan 29 15:11:15 crc kubenswrapper[4757]: E0129 15:11:15.395772 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.397512 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 07:57:59.001009027 +0000 UTC
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.463123 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.463441 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.463531 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.463616 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.463707 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:15Z","lastTransitionTime":"2026-01-29T15:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.565731 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.565790 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.565805 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.565822 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.565834 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:15Z","lastTransitionTime":"2026-01-29T15:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.667581 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.667624 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.667641 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.667654 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.667663 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:15Z","lastTransitionTime":"2026-01-29T15:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.692057 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8fwvd_e6815a1b-56eb-4075-84ae-1af5d0dcb742/ovnkube-controller/1.log"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.693213 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8fwvd_e6815a1b-56eb-4075-84ae-1af5d0dcb742/ovnkube-controller/0.log"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.695808 4757 generic.go:334] "Generic (PLEG): container finished" podID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerID="779eaca07ca2b75783e222d7855e1cbd6d1a77125a5bdedd50700aa286854064" exitCode=1
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.695889 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" event={"ID":"e6815a1b-56eb-4075-84ae-1af5d0dcb742","Type":"ContainerDied","Data":"779eaca07ca2b75783e222d7855e1cbd6d1a77125a5bdedd50700aa286854064"}
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.695960 4757 scope.go:117] "RemoveContainer" containerID="531bb7cc353cb099ea32705b2ebe2f3c4103ffe69df9226f1888964d4a3e75a8"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.697501 4757 scope.go:117] "RemoveContainer" containerID="779eaca07ca2b75783e222d7855e1cbd6d1a77125a5bdedd50700aa286854064"
Jan 29 15:11:15 crc kubenswrapper[4757]: E0129 15:11:15.697668 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8fwvd_openshift-ovn-kubernetes(e6815a1b-56eb-4075-84ae-1af5d0dcb742)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.715511 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.730327 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.744484 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b3162370fef0e0542b623cc6779b6642109211555df78e82440a74fedf376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.762886 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://779eaca07ca2b75783e222d7855e1cbd6d1a77125a5bdedd50700aa286854064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531bb7cc353cb099ea32705b2ebe2f3c4103ffe69df9226f1888964d4a3e75a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"message\\\":\\\"crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 15:11:14.286124 5952 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:11:14.286198 5952 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 15:11:14.286604 5952 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:11:14.286620 5952 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:11:14.286634 5952 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 15:11:14.286687 5952 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 15:11:14.286699 5952 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 15:11:14.286714 5952 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:11:14.286724 5952 factory.go:656] Stopping watch factory\\\\nI0129 15:11:14.286731 5952 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:11:14.286739 5952 ovnkube.go:599] Stopped ovnkube\\\\nI0129 15:11:14.286769 5952 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:11:14.286786 5952 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 
15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://779eaca07ca2b75783e222d7855e1cbd6d1a77125a5bdedd50700aa286854064\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:15Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:11:15.613977 6173 model_client.go:382] Update operations generated as: [{Op:update 
Table:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:15Z is after 2025-08-24T17:21:41Z"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.769511 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.769910 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.770049 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.770144 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.770499 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:15Z","lastTransitionTime":"2026-01-29T15:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.776354 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.790160 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drtf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c722d3b-1755-4633-967e-35591890a231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drtf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.803850 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.817220 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.827560 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.837716 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.854577 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.866595 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.872786 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.872829 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.872863 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.872880 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.872892 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:15Z","lastTransitionTime":"2026-01-29T15:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.878577 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.889851 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.902689 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.915500 4757 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8edba3ca332ecbe447cdb9d9fc5ab2f3c07cf42253ebffa7fbc669e21b9789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e189245d7b997f49cf802b2b19920c9ea002c2fed94b5020b8dd5c8955e5007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6v5r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:15Z is after 2025-08-24T17:21:41Z"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.975825 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.975913 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.975928 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.975946 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:15 crc kubenswrapper[4757]: I0129 15:11:15.975958 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:15Z","lastTransitionTime":"2026-01-29T15:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.077753 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.077793 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.077811 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.077825 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.077837 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:16Z","lastTransitionTime":"2026-01-29T15:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.180115 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.180154 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.180164 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.180179 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.180189 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:16Z","lastTransitionTime":"2026-01-29T15:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.282812 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.283147 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.283157 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.283170 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.283179 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:16Z","lastTransitionTime":"2026-01-29T15:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.385496 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.385535 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.385546 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.385562 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.385572 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:16Z","lastTransitionTime":"2026-01-29T15:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.396037 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.396410 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.396389 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:11:16 crc kubenswrapper[4757]: E0129 15:11:16.396737 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:11:16 crc kubenswrapper[4757]: E0129 15:11:16.397051 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:11:16 crc kubenswrapper[4757]: E0129 15:11:16.396935 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.398205 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 13:28:11.756959814 +0000 UTC
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.487831 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.488110 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.488216 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.488337 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.488462 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:16Z","lastTransitionTime":"2026-01-29T15:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.591652 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.591690 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.591700 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.591714 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.591725 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:16Z","lastTransitionTime":"2026-01-29T15:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.694146 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.694189 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.694200 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.694220 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.694233 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:16Z","lastTransitionTime":"2026-01-29T15:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.700308 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8fwvd_e6815a1b-56eb-4075-84ae-1af5d0dcb742/ovnkube-controller/1.log"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.796965 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.797010 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.797019 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.797038 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.797050 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:16Z","lastTransitionTime":"2026-01-29T15:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.898969 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.899010 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.899022 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.899038 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:16 crc kubenswrapper[4757]: I0129 15:11:16.899050 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:16Z","lastTransitionTime":"2026-01-29T15:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.001347 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.001382 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.001394 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.001411 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.001424 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:17Z","lastTransitionTime":"2026-01-29T15:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.104227 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.104288 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.104300 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.104317 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.104329 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:17Z","lastTransitionTime":"2026-01-29T15:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.206630 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.206667 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.206677 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.206693 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.206709 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:17Z","lastTransitionTime":"2026-01-29T15:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.309128 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.309164 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.309173 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.309186 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.309195 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:17Z","lastTransitionTime":"2026-01-29T15:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.396451 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8"
Jan 29 15:11:17 crc kubenswrapper[4757]: E0129 15:11:17.396610 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231"
Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.396699 4757 scope.go:117] "RemoveContainer" containerID="1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08"
Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.401638 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c722d3b-1755-4633-967e-35591890a231-metrics-certs\") pod \"network-metrics-daemon-drtf8\" (UID: \"8c722d3b-1755-4633-967e-35591890a231\") " pod="openshift-multus/network-metrics-daemon-drtf8"
Jan 29 15:11:17 crc kubenswrapper[4757]: E0129 15:11:17.401906 4757 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 29 15:11:17 crc kubenswrapper[4757]: E0129 15:11:17.402029 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c722d3b-1755-4633-967e-35591890a231-metrics-certs podName:8c722d3b-1755-4633-967e-35591890a231 nodeName:}" failed. No retries permitted until 2026-01-29 15:11:21.401964505 +0000 UTC m=+44.691214742 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8c722d3b-1755-4633-967e-35591890a231-metrics-certs") pod "network-metrics-daemon-drtf8" (UID: "8c722d3b-1755-4633-967e-35591890a231") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.398428 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 22:05:51.415834045 +0000 UTC Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.413494 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.413539 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.413552 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.413571 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.413583 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:17Z","lastTransitionTime":"2026-01-29T15:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.419556 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.432923 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drtf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c722d3b-1755-4633-967e-35591890a231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drtf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.447495 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.470298 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.484075 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.496231 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.511912 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.515995 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.516050 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.516065 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.516081 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.516092 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:17Z","lastTransitionTime":"2026-01-29T15:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.527377 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.540005 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.553180 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.565308 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.580591 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8edba3ca332ecbe447cdb9d9fc5ab2f3c07cf42253ebffa7fbc669e21b9789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e189245d7b997f49cf802b2b19920c9ea002c2fed94b5020b8dd5c8955e5007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\"
:\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6v5r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.596382 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has 
all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.610234 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.619199 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.619243 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.619258 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.619328 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.619340 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:17Z","lastTransitionTime":"2026-01-29T15:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.625571 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b3162370fef0e0542b623cc6779b6642109211555df78e82440a74fedf376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:
11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.643902 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://779eaca07ca2b75783e222d7855e1cbd6d1a7712
5a5bdedd50700aa286854064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531bb7cc353cb099ea32705b2ebe2f3c4103ffe69df9226f1888964d4a3e75a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"message\\\":\\\"crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 15:11:14.286124 5952 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:11:14.286198 5952 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 15:11:14.286604 5952 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:11:14.286620 5952 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:11:14.286634 5952 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 15:11:14.286687 5952 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 15:11:14.286699 5952 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 15:11:14.286714 5952 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:11:14.286724 5952 factory.go:656] Stopping watch factory\\\\nI0129 15:11:14.286731 5952 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:11:14.286739 5952 ovnkube.go:599] Stopped ovnkube\\\\nI0129 15:11:14.286769 5952 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:11:14.286786 5952 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://779eaca07ca2b75783e222d7855e1cbd6d1a77125a5bdedd50700aa286854064\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:15Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] 
Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:11:15.613977 6173 model_client.go:382] Update operations generated as: [{Op:update Table:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"
name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.707837 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.710366 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3412536b3e56824e5591481586ad2a7b78e3cab13dcb63aad36e544ec281b3b9"} Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.710827 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.721173 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.721198 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.721207 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.721220 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.721229 4757 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:17Z","lastTransitionTime":"2026-01-29T15:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.724053 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.737685 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.748354 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.758622 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.770400 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.781736 4757 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8edba3ca332ecbe447cdb9d9fc5ab2f3c07cf42253ebffa7fbc669e21b9789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e189245d7b997f49cf802b2b19920c9ea002c2fed94b5020b8dd5c8955e5007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6v5r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.793343 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3412536b3e56824e5591481586ad2a7b78e3cab13dcb63aad36e544ec281b3b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.812486 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.823079 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.823114 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.823123 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.823136 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.823145 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:17Z","lastTransitionTime":"2026-01-29T15:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.842838 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b3162370fef0e0542b623cc6779b6642109211555df78e82440a74fedf376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:
11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.872656 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://779eaca07ca2b75783e222d7855e1cbd6d1a7712
5a5bdedd50700aa286854064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://531bb7cc353cb099ea32705b2ebe2f3c4103ffe69df9226f1888964d4a3e75a8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"message\\\":\\\"crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 15:11:14.286124 5952 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:11:14.286198 5952 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 15:11:14.286604 5952 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:11:14.286620 5952 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:11:14.286634 5952 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 15:11:14.286687 5952 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 15:11:14.286699 5952 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 15:11:14.286714 5952 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:11:14.286724 5952 factory.go:656] Stopping watch factory\\\\nI0129 15:11:14.286731 5952 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:11:14.286739 5952 ovnkube.go:599] Stopped ovnkube\\\\nI0129 15:11:14.286769 5952 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 15:11:14.286786 5952 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://779eaca07ca2b75783e222d7855e1cbd6d1a77125a5bdedd50700aa286854064\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:15Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] 
Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:11:15.613977 6173 model_client.go:382] Update operations generated as: [{Op:update Table:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"
name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.885737 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.894824 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drtf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c722d3b-1755-4633-967e-35591890a231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drtf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.907964 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.919978 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.925569 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.925615 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.925627 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.925644 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.925658 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:17Z","lastTransitionTime":"2026-01-29T15:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.929143 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:17 crc kubenswrapper[4757]: I0129 15:11:17.938133 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:17Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:18 crc kubenswrapper[4757]: I0129 15:11:18.028572 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:18 crc kubenswrapper[4757]: I0129 15:11:18.028896 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:18 crc kubenswrapper[4757]: I0129 15:11:18.028976 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:18 crc kubenswrapper[4757]: I0129 15:11:18.029054 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:18 crc kubenswrapper[4757]: I0129 15:11:18.029138 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:18Z","lastTransitionTime":"2026-01-29T15:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:18 crc kubenswrapper[4757]: I0129 15:11:18.395554 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:11:18 crc kubenswrapper[4757]: I0129 15:11:18.395603 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:11:18 crc kubenswrapper[4757]: E0129 15:11:18.395946 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:11:18 crc kubenswrapper[4757]: E0129 15:11:18.396064 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:11:18 crc kubenswrapper[4757]: I0129 15:11:18.395630 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:11:18 crc kubenswrapper[4757]: E0129 15:11:18.396521 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:11:18 crc kubenswrapper[4757]: I0129 15:11:18.403983 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 14:10:12.160715207 +0000 UTC
Jan 29 15:11:19 crc kubenswrapper[4757]: I0129 15:11:19.395928 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8"
Jan 29 15:11:19 crc kubenswrapper[4757]: E0129 15:11:19.396073 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231"
Jan 29 15:11:19 crc kubenswrapper[4757]: I0129 15:11:19.404686 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 21:57:37.715877317 +0000 UTC
Jan 29 15:11:20 crc kubenswrapper[4757]: I0129 15:11:20.396010 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:11:20 crc kubenswrapper[4757]: I0129 15:11:20.396032 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:11:20 crc kubenswrapper[4757]: I0129 15:11:20.396075 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:11:20 crc kubenswrapper[4757]: E0129 15:11:20.396150 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:11:20 crc kubenswrapper[4757]: E0129 15:11:20.396250 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:11:20 crc kubenswrapper[4757]: E0129 15:11:20.396405 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:11:20 crc kubenswrapper[4757]: I0129 15:11:20.405335 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 03:33:11.017878028 +0000 UTC
Jan 29 15:11:21 crc kubenswrapper[4757]: I0129 15:11:21.396337 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8"
Jan 29 15:11:21 crc kubenswrapper[4757]: E0129 15:11:21.396478 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231"
Jan 29 15:11:21 crc kubenswrapper[4757]: I0129 15:11:21.405574 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 11:43:16.948424655 +0000 UTC
Jan 29 15:11:21 crc kubenswrapper[4757]: I0129 15:11:21.445214 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c722d3b-1755-4633-967e-35591890a231-metrics-certs\") pod \"network-metrics-daemon-drtf8\" (UID: \"8c722d3b-1755-4633-967e-35591890a231\") " pod="openshift-multus/network-metrics-daemon-drtf8"
Jan 29 15:11:21 crc kubenswrapper[4757]: E0129 15:11:21.445457 4757 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 29 15:11:21 crc kubenswrapper[4757]: E0129 15:11:21.445557 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c722d3b-1755-4633-967e-35591890a231-metrics-certs podName:8c722d3b-1755-4633-967e-35591890a231 nodeName:}" failed. No retries permitted until 2026-01-29 15:11:29.445539506 +0000 UTC m=+52.734789743 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8c722d3b-1755-4633-967e-35591890a231-metrics-certs") pod "network-metrics-daemon-drtf8" (UID: "8c722d3b-1755-4633-967e-35591890a231") : object "openshift-multus"/"metrics-daemon-secret" not registered
Has your network provider started?"} Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.036154 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.036193 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.036208 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.036224 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.036235 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:22Z","lastTransitionTime":"2026-01-29T15:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.137920 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.137969 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.137980 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.137996 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.138008 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:22Z","lastTransitionTime":"2026-01-29T15:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.240578 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.240621 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.240635 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.240650 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.240661 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:22Z","lastTransitionTime":"2026-01-29T15:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.343592 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.343632 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.343646 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.343666 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.343681 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:22Z","lastTransitionTime":"2026-01-29T15:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.395628 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.395760 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:11:22 crc kubenswrapper[4757]: E0129 15:11:22.395810 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:11:22 crc kubenswrapper[4757]: E0129 15:11:22.395953 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.395760 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:22 crc kubenswrapper[4757]: E0129 15:11:22.396173 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.405797 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 01:03:33.32063175 +0000 UTC Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.447222 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.447255 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.447279 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.447303 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.447313 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:22Z","lastTransitionTime":"2026-01-29T15:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.550866 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.551333 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.551346 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.551364 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.551378 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:22Z","lastTransitionTime":"2026-01-29T15:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.654705 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.654763 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.654780 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.654802 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.654818 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:22Z","lastTransitionTime":"2026-01-29T15:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.757483 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.757521 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.757530 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.757542 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.757551 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:22Z","lastTransitionTime":"2026-01-29T15:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.860147 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.860192 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.860206 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.860226 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.860238 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:22Z","lastTransitionTime":"2026-01-29T15:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.963180 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.963254 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.963297 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.963311 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:22 crc kubenswrapper[4757]: I0129 15:11:22.963320 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:22Z","lastTransitionTime":"2026-01-29T15:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.064820 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.064856 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.064868 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.064882 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.064894 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:23Z","lastTransitionTime":"2026-01-29T15:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.166902 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.166945 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.166956 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.166973 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.166986 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:23Z","lastTransitionTime":"2026-01-29T15:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.269541 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.269577 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.269587 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.269600 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.269612 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:23Z","lastTransitionTime":"2026-01-29T15:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.373287 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.373335 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.373347 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.373367 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.373379 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:23Z","lastTransitionTime":"2026-01-29T15:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.396395 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:11:23 crc kubenswrapper[4757]: E0129 15:11:23.396625 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.407026 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 23:55:23.329553424 +0000 UTC Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.476536 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.476604 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.476619 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.476646 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.476665 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:23Z","lastTransitionTime":"2026-01-29T15:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.579509 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.579547 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.579560 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.579579 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.579592 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:23Z","lastTransitionTime":"2026-01-29T15:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.682525 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.682562 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.682569 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.682582 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.682591 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:23Z","lastTransitionTime":"2026-01-29T15:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.784856 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.784915 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.784926 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.784939 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.784949 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:23Z","lastTransitionTime":"2026-01-29T15:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.887720 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.887774 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.887786 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.887803 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.887816 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:23Z","lastTransitionTime":"2026-01-29T15:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.989990 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.990027 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.990036 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.990051 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:23 crc kubenswrapper[4757]: I0129 15:11:23.990060 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:23Z","lastTransitionTime":"2026-01-29T15:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.092726 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.092784 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.092799 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.092822 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.092835 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:24Z","lastTransitionTime":"2026-01-29T15:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.195894 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.195936 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.195947 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.195966 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.195980 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:24Z","lastTransitionTime":"2026-01-29T15:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.299328 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.299382 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.299402 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.299431 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.299448 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:24Z","lastTransitionTime":"2026-01-29T15:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.395313 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.395360 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.395322 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:11:24 crc kubenswrapper[4757]: E0129 15:11:24.395494 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:11:24 crc kubenswrapper[4757]: E0129 15:11:24.395601 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:11:24 crc kubenswrapper[4757]: E0129 15:11:24.395730 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.401710 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.401746 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.401757 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.401792 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.401822 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:24Z","lastTransitionTime":"2026-01-29T15:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.407497 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 11:43:51.033879918 +0000 UTC Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.504055 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.504118 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.504133 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.504154 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.504172 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:24Z","lastTransitionTime":"2026-01-29T15:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.565419 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.565480 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.565494 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.565513 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.565528 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:24Z","lastTransitionTime":"2026-01-29T15:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:24 crc kubenswrapper[4757]: E0129 15:11:24.586307 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:24Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.590655 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.590706 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.590717 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.590735 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.591072 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:24Z","lastTransitionTime":"2026-01-29T15:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:24 crc kubenswrapper[4757]: E0129 15:11:24.616691 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:24Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.620433 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.620723 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.620880 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.621031 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.621160 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:24Z","lastTransitionTime":"2026-01-29T15:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:24 crc kubenswrapper[4757]: E0129 15:11:24.641171 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:24Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.644642 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.644676 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.644687 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.644703 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.644712 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:24Z","lastTransitionTime":"2026-01-29T15:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:24 crc kubenswrapper[4757]: E0129 15:11:24.661911 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:24Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.665596 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.665634 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.665644 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.665658 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.665669 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:24Z","lastTransitionTime":"2026-01-29T15:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:24 crc kubenswrapper[4757]: E0129 15:11:24.677719 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:24Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:24 crc kubenswrapper[4757]: E0129 15:11:24.677882 4757 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.679327 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.679355 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.679364 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.679397 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.679409 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:24Z","lastTransitionTime":"2026-01-29T15:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.782491 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.782557 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.782571 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.782589 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.782599 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:24Z","lastTransitionTime":"2026-01-29T15:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.886236 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.886324 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.886337 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.886352 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.886363 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:24Z","lastTransitionTime":"2026-01-29T15:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.989205 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.989275 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.989285 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.989300 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:24 crc kubenswrapper[4757]: I0129 15:11:24.989310 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:24Z","lastTransitionTime":"2026-01-29T15:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.092045 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.092096 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.092110 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.092128 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.092510 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:25Z","lastTransitionTime":"2026-01-29T15:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.194564 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.194605 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.194616 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.194634 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.194644 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:25Z","lastTransitionTime":"2026-01-29T15:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.296334 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.296370 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.296379 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.296392 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.296400 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:25Z","lastTransitionTime":"2026-01-29T15:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.395572 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:11:25 crc kubenswrapper[4757]: E0129 15:11:25.395755 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.398308 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.398350 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.398367 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.398386 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.398398 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:25Z","lastTransitionTime":"2026-01-29T15:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.408606 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 05:40:14.887365099 +0000 UTC Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.500352 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.500402 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.500414 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.500431 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.500445 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:25Z","lastTransitionTime":"2026-01-29T15:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.602595 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.602631 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.602640 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.602652 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.602662 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:25Z","lastTransitionTime":"2026-01-29T15:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.705891 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.705962 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.705980 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.706006 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.706034 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:25Z","lastTransitionTime":"2026-01-29T15:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.808109 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.808191 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.808207 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.808224 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.808236 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:25Z","lastTransitionTime":"2026-01-29T15:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.912926 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.912970 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.912979 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.912995 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:25 crc kubenswrapper[4757]: I0129 15:11:25.913005 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:25Z","lastTransitionTime":"2026-01-29T15:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.014857 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.014903 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.014917 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.014936 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.014951 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:26Z","lastTransitionTime":"2026-01-29T15:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.118381 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.118454 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.118473 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.118499 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.118517 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:26Z","lastTransitionTime":"2026-01-29T15:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.221739 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.221818 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.221835 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.221885 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.221904 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:26Z","lastTransitionTime":"2026-01-29T15:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.324828 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.324900 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.324914 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.324932 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.324946 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:26Z","lastTransitionTime":"2026-01-29T15:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.395607 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.395691 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:11:26 crc kubenswrapper[4757]: E0129 15:11:26.395716 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.395801 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:11:26 crc kubenswrapper[4757]: E0129 15:11:26.396123 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:11:26 crc kubenswrapper[4757]: E0129 15:11:26.396230 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.396421 4757 scope.go:117] "RemoveContainer" containerID="779eaca07ca2b75783e222d7855e1cbd6d1a77125a5bdedd50700aa286854064" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.408806 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 14:33:28.22329337 +0000 UTC Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.413167 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drtf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c722d3b-1755-4633-967e-35591890a231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drtf8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:26Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.427502 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:26Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.428081 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.428130 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.428146 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.428168 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.428182 4757 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:26Z","lastTransitionTime":"2026-01-29T15:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.438930 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:26Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.452250 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:26Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.466976 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:26Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.478255 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:26Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.491394 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:26Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.502823 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:26Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.513792 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:26Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.523500 4757 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:26Z is after 2025-08-24T17:21:41Z" Jan 29 
15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.530761 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.530812 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.530824 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.530841 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.530854 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:26Z","lastTransitionTime":"2026-01-29T15:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.533406 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8edba3ca332ecbe447cdb9d9fc5ab2f3c07cf42253ebffa7fbc669e21b9789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e189245d7b997f49cf802b2b19920c9ea002c2fed94b5020b8dd5c8955e5007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b15
4edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6v5r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:26Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.546543 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:26Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.559811 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:26Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.580045 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b3162370fef0e0542b623cc6779b6642109211555df78e82440a74fedf376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f721351
93300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"
name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:26Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.602711 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://779eaca07ca2b75783e222d7855e1cbd6d1a7712
5a5bdedd50700aa286854064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://779eaca07ca2b75783e222d7855e1cbd6d1a77125a5bdedd50700aa286854064\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:15Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:11:15.613977 6173 model_client.go:382] Update operations generated as: [{Op:update Table:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8fwvd_openshift-ovn-kubernetes(e6815a1b-56eb-4075-84ae-1af5d0dcb742)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:26Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.615592 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3412536b3e56824e5591481586ad2a7b78e3cab13dcb63aad36e544ec281b3b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:26Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.632970 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.633033 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.633048 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.633082 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.633093 4757 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:26Z","lastTransitionTime":"2026-01-29T15:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.736118 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.736178 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.736191 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.736224 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.736237 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:26Z","lastTransitionTime":"2026-01-29T15:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.740654 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8fwvd_e6815a1b-56eb-4075-84ae-1af5d0dcb742/ovnkube-controller/1.log" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.743344 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" event={"ID":"e6815a1b-56eb-4075-84ae-1af5d0dcb742","Type":"ContainerStarted","Data":"068d44de664f08238a950818d55a58ba5db108661e28b9152a194ba625c0c280"} Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.839035 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.839078 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.839091 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.839110 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.839122 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:26Z","lastTransitionTime":"2026-01-29T15:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.941148 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.941206 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.941222 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.941250 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:26 crc kubenswrapper[4757]: I0129 15:11:26.941313 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:26Z","lastTransitionTime":"2026-01-29T15:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.044110 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.044160 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.044176 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.044198 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.044214 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:27Z","lastTransitionTime":"2026-01-29T15:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.145954 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.146287 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.146370 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.146439 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.146496 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:27Z","lastTransitionTime":"2026-01-29T15:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.248828 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.248866 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.248875 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.248890 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.248901 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:27Z","lastTransitionTime":"2026-01-29T15:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.351517 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.351826 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.351909 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.352000 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.352069 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:27Z","lastTransitionTime":"2026-01-29T15:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.395892 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:11:27 crc kubenswrapper[4757]: E0129 15:11:27.396013 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.407465 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.409147 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 11:58:24.068511596 +0000 UTC Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.417413 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drtf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c722d3b-1755-4633-967e-35591890a231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drtf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.430987 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.441514 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.453815 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.453853 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.453862 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.453875 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.453886 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:27Z","lastTransitionTime":"2026-01-29T15:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.457565 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.472439 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.486914 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.499957 4757 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8edba3ca332ecbe447cdb9d9fc5ab2f3c07cf42253ebffa7fbc669e21b9789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e189245d7b997f49cf802b2b19920c9ea002c2fed94b5020b8dd5c8955e5007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6v5r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.513229 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.525082 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.538699 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.553024 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.555451 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.555496 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.555507 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.555523 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.555534 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:27Z","lastTransitionTime":"2026-01-29T15:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.576422 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://779eaca07ca2b75783e222d7855e1cbd6d1a7712
5a5bdedd50700aa286854064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://779eaca07ca2b75783e222d7855e1cbd6d1a77125a5bdedd50700aa286854064\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:15Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:11:15.613977 6173 model_client.go:382] Update operations generated as: [{Op:update Table:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8fwvd_openshift-ovn-kubernetes(e6815a1b-56eb-4075-84ae-1af5d0dcb742)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.593666 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3412536b3e56824e5591481586ad2a7b78e3cab13dcb63aad36e544ec281b3b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.608151 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.622114 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b3162370fef0e0542b623cc6779b6642109211555df78e82440a74fedf376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.657594 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.657916 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.658039 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.658161 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.658313 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:27Z","lastTransitionTime":"2026-01-29T15:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.746470 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.760953 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.760982 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.760990 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.761004 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.761013 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:27Z","lastTransitionTime":"2026-01-29T15:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.768500 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.782650 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drtf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c722d3b-1755-4633-967e-35591890a231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drtf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.795784 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.810600 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.824058 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.836511 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.852155 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.863050 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.863088 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.863099 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.863116 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.863127 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:27Z","lastTransitionTime":"2026-01-29T15:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.864344 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.875900 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.887103 4757 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8edba3ca332ecbe447cdb9d9fc5ab2f3c07cf42253ebffa7fbc669e21b9789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e189245d7b997f49cf802b2b19920c9ea002c2fed94b5020b8dd5c8955e5007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6v5r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.904772 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.919860 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.933769 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.953060 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b3162370fef0e0542b623cc6779b6642109211555df78e82440a74fedf376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.965861 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.965894 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.965902 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.965916 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.965926 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:27Z","lastTransitionTime":"2026-01-29T15:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.973517 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://068d44de664f08238a950818d55a58ba5db10866
1e28b9152a194ba625c0c280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://779eaca07ca2b75783e222d7855e1cbd6d1a77125a5bdedd50700aa286854064\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:15Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:11:15.613977 6173 model_client.go:382] Update operations generated as: [{Op:update 
Table:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:27 crc kubenswrapper[4757]: I0129 15:11:27.997626 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3412536b3e56824e5591481586ad2a7b78e3cab13dcb63aad36e544ec281b3b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.067762 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.068238 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.068390 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.068443 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.068463 4757 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:28Z","lastTransitionTime":"2026-01-29T15:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.171284 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.171318 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.171328 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.171346 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.171356 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:28Z","lastTransitionTime":"2026-01-29T15:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.273447 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.273485 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.273496 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.273511 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.273521 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:28Z","lastTransitionTime":"2026-01-29T15:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.375986 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.376227 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.376412 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.376511 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.376592 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:28Z","lastTransitionTime":"2026-01-29T15:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.396304 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:11:28 crc kubenswrapper[4757]: E0129 15:11:28.396466 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.396500 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:11:28 crc kubenswrapper[4757]: E0129 15:11:28.396626 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.396741 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:28 crc kubenswrapper[4757]: E0129 15:11:28.396931 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.409505 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 06:47:41.871736505 +0000 UTC Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.478658 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.478942 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.479024 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.479100 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.479157 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:28Z","lastTransitionTime":"2026-01-29T15:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.581969 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.582024 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.582038 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.582058 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.582072 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:28Z","lastTransitionTime":"2026-01-29T15:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.684792 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.685057 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.685078 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.685106 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.685127 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:28Z","lastTransitionTime":"2026-01-29T15:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.752070 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8fwvd_e6815a1b-56eb-4075-84ae-1af5d0dcb742/ovnkube-controller/2.log" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.752988 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8fwvd_e6815a1b-56eb-4075-84ae-1af5d0dcb742/ovnkube-controller/1.log" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.756943 4757 generic.go:334] "Generic (PLEG): container finished" podID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerID="068d44de664f08238a950818d55a58ba5db108661e28b9152a194ba625c0c280" exitCode=1 Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.756992 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" event={"ID":"e6815a1b-56eb-4075-84ae-1af5d0dcb742","Type":"ContainerDied","Data":"068d44de664f08238a950818d55a58ba5db108661e28b9152a194ba625c0c280"} Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.757035 4757 scope.go:117] "RemoveContainer" containerID="779eaca07ca2b75783e222d7855e1cbd6d1a77125a5bdedd50700aa286854064" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.758128 4757 scope.go:117] "RemoveContainer" containerID="068d44de664f08238a950818d55a58ba5db108661e28b9152a194ba625c0c280" Jan 29 15:11:28 crc kubenswrapper[4757]: E0129 15:11:28.758486 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8fwvd_openshift-ovn-kubernetes(e6815a1b-56eb-4075-84ae-1af5d0dcb742)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.783141 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.788537 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.788580 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.788591 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.788610 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.788624 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:28Z","lastTransitionTime":"2026-01-29T15:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.799324 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.811773 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.825803 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.839153 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.843395 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.853648 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8edba3ca332ecbe447cdb9d9fc5ab2f3c07cf42253ebffa7fbc669e21b9789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e189245d7b997f49cf802b2b19920c9ea002c2fed94b5020b8dd5c8955e5007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\
\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6v5r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.854802 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.869052 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3412536b3e56824e5591481586ad2a7b78e3cab13dcb63aad36e544ec281b3b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.881173 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.890430 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.890467 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.890484 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.890501 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.890510 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:28Z","lastTransitionTime":"2026-01-29T15:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.895680 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b3162370fef0e0542b623cc6779b6642109211555df78e82440a74fedf376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:
11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.915429 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://068d44de664f08238a950818d55a58ba5db10866
1e28b9152a194ba625c0c280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://779eaca07ca2b75783e222d7855e1cbd6d1a77125a5bdedd50700aa286854064\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:15Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:11:15.613977 6173 model_client.go:382] Update operations generated as: [{Op:update Table:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://068d44de664f08238a950818d55a58ba5db108661e28b9152a194ba625c0c280\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:27Z\\\",\\\"message\\\":\\\"c3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z]\\\\nI0129 15:11:27.604186 6329 services_controller.go:434] Service openshift-etcd-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-etcd-operator ff1da138-ae82-4792-ae1f-3b2df1427723 4289 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:etcd-operator] 
map[include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:etcd-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075b05b7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\
\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.945102 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.962971 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drtf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c722d3b-1755-4633-967e-35591890a231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drtf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.986761 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.992035 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.992064 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.992073 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.992089 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:28 crc kubenswrapper[4757]: I0129 15:11:28.992100 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:28Z","lastTransitionTime":"2026-01-29T15:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.000767 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.012749 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.022718 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.035565 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b3162370fef0e0542b623cc6779b6642109211555df78e82440a74fedf376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.054469 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://068d44de664f08238a950818d55a58ba5db108661e28b9152a194ba625c0c280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://779eaca07ca2b75783e222d7855e1cbd6d1a77125a5bdedd50700aa286854064\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:15Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:43933d5e-3c3b-4ff8-8926-04ac25de450e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:11:15.613977 6173 model_client.go:382] Update operations generated as: [{Op:update 
Table:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://068d44de664f08238a950818d55a58ba5db108661e28b9152a194ba625c0c280\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:27Z\\\",\\\"message\\\":\\\"c3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z]\\\\nI0129 15:11:27.604186 6329 services_controller.go:434] Service openshift-etcd-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-etcd-operator ff1da138-ae82-4792-ae1f-3b2df1427723 4289 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:etcd-operator] map[include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:etcd-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075b05b7 \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.067774 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3412536b3e56824e5591481586ad2a7b78e3cab13dcb63aad36e544ec281b3b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.079938 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.091499 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.094390 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.094426 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.094455 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.094473 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.094483 4757 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:29Z","lastTransitionTime":"2026-01-29T15:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.102878 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drtf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c722d3b-1755-4633-967e-35591890a231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drtf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.116845 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.127491 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.136536 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.148023 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.157673 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cc9943-8670-4bdc-a5c0-b7f5260603f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ea7daad0c4fdedbde11ce3aefb6b535e868f6af94052fae4e04ab8cb9192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6396476b582de4682f63061b8c38324ea5e530d9227f520bad41f5fab1e3fa50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef9b463efd0451a165de686ba00645d799cddbf305a6f7227954dac78bbfe53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1d3c2dec6b32ab9db2187d008e2416c05f2b7f47a588b0d7e8e4d9645eb0ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1d3c2dec6b32ab9db2187d008e2416c05f2b7f47a588b0d7e8e4d9645eb0ffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.169502 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.179327 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.188371 4757 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8edba3ca332ecbe447cdb9d9fc5ab2f3c07cf42253ebffa7fbc669e21b9789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e189245d7b997f49cf802b2b19920c9ea002c2fed94b5020b8dd5c8955e5007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6v5r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z"
Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.196495 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.196530 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.196538 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.196551 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.196559 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:29Z","lastTransitionTime":"2026-01-29T15:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.204894 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.215175 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.225121 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.299162 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.299207 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.299217 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.299245 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.299255 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:29Z","lastTransitionTime":"2026-01-29T15:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.396334 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:11:29 crc kubenswrapper[4757]: E0129 15:11:29.396566 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.401897 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.401969 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.401984 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.402004 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.402020 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:29Z","lastTransitionTime":"2026-01-29T15:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.410576 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 15:46:47.83415359 +0000 UTC Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.505455 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.505494 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.505506 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.505520 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.505530 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:29Z","lastTransitionTime":"2026-01-29T15:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.526608 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c722d3b-1755-4633-967e-35591890a231-metrics-certs\") pod \"network-metrics-daemon-drtf8\" (UID: \"8c722d3b-1755-4633-967e-35591890a231\") " pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:11:29 crc kubenswrapper[4757]: E0129 15:11:29.526818 4757 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:11:29 crc kubenswrapper[4757]: E0129 15:11:29.526917 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c722d3b-1755-4633-967e-35591890a231-metrics-certs podName:8c722d3b-1755-4633-967e-35591890a231 nodeName:}" failed. No retries permitted until 2026-01-29 15:11:45.52689664 +0000 UTC m=+68.816146947 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8c722d3b-1755-4633-967e-35591890a231-metrics-certs") pod "network-metrics-daemon-drtf8" (UID: "8c722d3b-1755-4633-967e-35591890a231") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.608252 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.608308 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.608319 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.608332 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.608342 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:29Z","lastTransitionTime":"2026-01-29T15:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.710831 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.710896 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.710917 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.710943 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.710964 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:29Z","lastTransitionTime":"2026-01-29T15:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.762651 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8fwvd_e6815a1b-56eb-4075-84ae-1af5d0dcb742/ovnkube-controller/2.log" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.768488 4757 scope.go:117] "RemoveContainer" containerID="068d44de664f08238a950818d55a58ba5db108661e28b9152a194ba625c0c280" Jan 29 15:11:29 crc kubenswrapper[4757]: E0129 15:11:29.769659 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8fwvd_openshift-ovn-kubernetes(e6815a1b-56eb-4075-84ae-1af5d0dcb742)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.814529 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.814595 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.814621 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.814653 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.814676 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:29Z","lastTransitionTime":"2026-01-29T15:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.819331 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b3162370fef0e0542b623cc6779b6642109211555df78e82440a74fedf376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.848495 4757 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://068d44de664f08238a950818d55a58ba5db108661e28b9152a194ba625c0c280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://068d44de664f08238a950818d55a58ba5db108661e28b9152a194ba625c0c280\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:27Z\\\",\\\"message\\\":\\\"c3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z]\\\\nI0129 15:11:27.604186 6329 services_controller.go:434] Service openshift-etcd-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-etcd-operator ff1da138-ae82-4792-ae1f-3b2df1427723 4289 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:etcd-operator] map[include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:etcd-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075b05b7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8fwvd_openshift-ovn-kubernetes(e6815a1b-56eb-4075-84ae-1af5d0dcb742)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.868316 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3412536b3e56824e5591481586ad2a7b78e3cab13dcb63aad36e544ec281b3b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.883338 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.917180 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.917216 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.917225 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.917241 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.917251 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:29Z","lastTransitionTime":"2026-01-29T15:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.918753 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.930280 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drtf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c722d3b-1755-4633-967e-35591890a231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drtf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.943615 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.956684 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.970999 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.981822 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:29 crc kubenswrapper[4757]: I0129 15:11:29.991994 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cc9943-8670-4bdc-a5c0-b7f5260603f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ea7daad0c4fdedbde11ce3aefb6b535e868f6af94052fae4e04ab8cb9192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6396476b582de4682f63061b8c38324ea5e530d9227f520bad41f5fab1e3fa50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef9b463efd0451a165de686ba00645d799cddbf305a6f7227954dac78bbfe53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1d3c2dec6b32ab9db2187d008e2416c05f2b7f47a588b0d7e8e4d9645eb0ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1d3c2dec6b32ab9db2187d008e2416c05f2b7f47a588b0d7e8e4d9645eb0ffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:29Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.001634 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:30Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.010462 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:30Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.018778 4757 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.018805 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.018812 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.018824 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.018832 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:30Z","lastTransitionTime":"2026-01-29T15:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.020983 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8edba3ca332ecbe447cdb9d9fc5ab2f3c07cf42253ebffa7fbc669e21b9789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e189245d7b997f49cf802b2b19920c9ea002c2fed94b5020b8dd5c8955e5007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6v5r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:30Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.032395 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:30Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.043375 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:30Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.054022 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:30Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.120889 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.120928 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.120939 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.120954 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.120963 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:30Z","lastTransitionTime":"2026-01-29T15:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.223858 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.223934 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.223964 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.223980 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.224005 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:30Z","lastTransitionTime":"2026-01-29T15:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.233604 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:11:30 crc kubenswrapper[4757]: E0129 15:11:30.233851 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:12:02.233807667 +0000 UTC m=+85.523057934 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.326927 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.327391 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.327603 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.327817 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.328025 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:30Z","lastTransitionTime":"2026-01-29T15:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.334509 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.334550 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.334578 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.334605 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:11:30 crc kubenswrapper[4757]: E0129 15:11:30.334704 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:11:30 crc kubenswrapper[4757]: E0129 15:11:30.334718 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:11:30 crc kubenswrapper[4757]: E0129 15:11:30.334728 4757 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:11:30 crc kubenswrapper[4757]: E0129 15:11:30.334768 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:12:02.334756579 +0000 UTC m=+85.624006816 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:11:30 crc kubenswrapper[4757]: E0129 15:11:30.335049 4757 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:11:30 crc kubenswrapper[4757]: E0129 15:11:30.335075 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:12:02.335068348 +0000 UTC m=+85.624318585 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:11:30 crc kubenswrapper[4757]: E0129 15:11:30.335112 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:11:30 crc kubenswrapper[4757]: E0129 15:11:30.335121 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:11:30 crc kubenswrapper[4757]: E0129 15:11:30.335128 4757 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:11:30 crc kubenswrapper[4757]: E0129 15:11:30.335146 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:12:02.33514022 +0000 UTC m=+85.624390457 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:11:30 crc kubenswrapper[4757]: E0129 15:11:30.339014 4757 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:11:30 crc kubenswrapper[4757]: E0129 15:11:30.339159 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:12:02.339128426 +0000 UTC m=+85.628378663 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.396004 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:11:30 crc kubenswrapper[4757]: E0129 15:11:30.396123 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.396491 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.396561 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:30 crc kubenswrapper[4757]: E0129 15:11:30.396578 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:11:30 crc kubenswrapper[4757]: E0129 15:11:30.396749 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.411174 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 06:44:13.245303851 +0000 UTC Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.431064 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.431093 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.431103 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.431118 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.431128 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:30Z","lastTransitionTime":"2026-01-29T15:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.533039 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.533075 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.533083 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.533097 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.533107 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:30Z","lastTransitionTime":"2026-01-29T15:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.636073 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.636123 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.636135 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.636153 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.636164 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:30Z","lastTransitionTime":"2026-01-29T15:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.739041 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.739098 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.739114 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.739138 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.739157 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:30Z","lastTransitionTime":"2026-01-29T15:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.841980 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.842257 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.842370 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.842488 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.842574 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:30Z","lastTransitionTime":"2026-01-29T15:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.945217 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.945515 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.945580 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.945639 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:30 crc kubenswrapper[4757]: I0129 15:11:30.945701 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:30Z","lastTransitionTime":"2026-01-29T15:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.022534 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.041060 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:31 crc 
kubenswrapper[4757]: I0129 15:11:31.048188 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.048218 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.048230 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.048244 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.048253 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:31Z","lastTransitionTime":"2026-01-29T15:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.053337 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.063657 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.076428 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.091561 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cc9943-8670-4bdc-a5c0-b7f5260603f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ea7daad0c4fdedbde11ce3aefb6b535e868f6af94052fae4e04ab8cb9192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6396476b582de4682f63061b8c38324ea5e530d9227f520bad41f5fab1e3fa50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef9b463efd0451a165de686ba00645d799cddbf305a6f7227954dac78bbfe53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1d3c2dec6b32ab9db2187d008e2416c05f2b7f47a588b0d7e8e4d9645eb0ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1d3c2dec6b32ab9db2187d008e2416c05f2b7f47a588b0d7e8e4d9645eb0ffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.103635 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.117602 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.131157 4757 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8edba3ca332ecbe447cdb9d9fc5ab2f3c07cf42253ebffa7fbc669e21b9789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e189245d7b997f49cf802b2b19920c9ea002c2fed94b5020b8dd5c8955e5007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6v5r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.145191 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.150505 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.150568 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.150581 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.150603 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.150616 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:31Z","lastTransitionTime":"2026-01-29T15:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.158785 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.171925 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.187074 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b3162370fef0e0542b623cc6779b6642109211555df78e82440a74fedf376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.205158 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://068d44de664f08238a950818d55a58ba5db108661e28b9152a194ba625c0c280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://068d44de664f08238a950818d55a58ba5db108661e28b9152a194ba625c0c280\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:27Z\\\",\\\"message\\\":\\\"c3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z]\\\\nI0129 15:11:27.604186 6329 services_controller.go:434] Service openshift-etcd-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-etcd-operator ff1da138-ae82-4792-ae1f-3b2df1427723 4289 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:etcd-operator] map[include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:etcd-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075b05b7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8fwvd_openshift-ovn-kubernetes(e6815a1b-56eb-4075-84ae-1af5d0dcb742)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.218256 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3412536b3e56824e5591481586ad2a7b78e3cab13dcb63aad36e544ec281b3b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.232079 4757 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.244077 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.253629 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.253693 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.253706 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.253743 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.253757 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:31Z","lastTransitionTime":"2026-01-29T15:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.254161 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drtf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c722d3b-1755-4633-967e-35591890a231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drtf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:31Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.355888 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.355923 4757 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.355932 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.355946 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.355955 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:31Z","lastTransitionTime":"2026-01-29T15:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.396509 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:11:31 crc kubenswrapper[4757]: E0129 15:11:31.396691 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.412021 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 15:23:36.385622687 +0000 UTC Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.458228 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.458305 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.458322 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.458342 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:31 crc kubenswrapper[4757]: I0129 15:11:31.458362 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:31Z","lastTransitionTime":"2026-01-29T15:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 15:11:32 crc kubenswrapper[4757]: I0129 15:11:32.395632 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:11:32 crc kubenswrapper[4757]: I0129 15:11:32.395792 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:11:32 crc kubenswrapper[4757]: E0129 15:11:32.395905 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:11:32 crc kubenswrapper[4757]: I0129 15:11:32.395961 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:11:32 crc kubenswrapper[4757]: E0129 15:11:32.396067 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:11:32 crc kubenswrapper[4757]: E0129 15:11:32.396149 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:11:32 crc kubenswrapper[4757]: I0129 15:11:32.412481 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 12:24:38.946972448 +0000 UTC
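Interleaved with the heartbeat spam, the kubelet keeps refusing to create pod sandboxes because NetworkReady=false: no CNI configuration file exists in /etc/kubernetes/cni/net.d/. A small Go sketch that performs the same check the message implies, assuming the path quoted in the log; the accepted extensions (.conf, .conflist, .json) match what libcni scans for, to the best of my knowledge:

// check_cni_config.go: a sketch, assuming the CNI config path from the log.
// The kubelet reports NetworkReady=false until the network operator writes
// a usable configuration file into this directory.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path quoted in the log entries above
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatalf("cannot read %s: %v", dir, err)
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config:", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration files found; matches the kubelet error above")
	}
}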
Jan 29 15:11:33 crc kubenswrapper[4757]: I0129 15:11:33.398158 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8"
Jan 29 15:11:33 crc kubenswrapper[4757]: E0129 15:11:33.398325 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231"
Jan 29 15:11:33 crc kubenswrapper[4757]: I0129 15:11:33.412905 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 18:50:48.852745812 +0000 UTC
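The certificate_manager.go:356 lines deserve a note: the kubelet-serving certificate's expiration is constant (2026-02-24 05:53:03 UTC), yet the logged rotation deadline differs on every attempt (2025-12-18, 2025-11-08, 2026-01-08, and so on). That is expected behavior: as I understand client-go's certificate manager, the deadline is re-drawn on each pass as a jittered point late in the certificate's validity window. A sketch of that calculation; the 70 to 90 percent band and the notBefore date are assumptions for illustration, not values taken from this log:

// rotation_deadline.go: sketch of the jittered rotation deadline that makes
// the kubelet log a different deadline on every attempt. The 70-90% band
// mirrors client-go's certificate manager as I understand it (an assumption).
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	// pick a uniformly random point between 70% and 90% of the validity window
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notBefore := time.Date(2025, 4, 25, 5, 53, 3, 0, time.UTC) // hypothetical issue time
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiration from the log
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}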
Jan 29 15:11:34 crc kubenswrapper[4757]: I0129 15:11:34.395994 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:11:34 crc kubenswrapper[4757]: I0129 15:11:34.396075 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:11:34 crc kubenswrapper[4757]: E0129 15:11:34.396120 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:11:34 crc kubenswrapper[4757]: I0129 15:11:34.396182 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:11:34 crc kubenswrapper[4757]: E0129 15:11:34.396313 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:11:34 crc kubenswrapper[4757]: E0129 15:11:34.396388 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:11:34 crc kubenswrapper[4757]: I0129 15:11:34.413159 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 10:12:58.327256836 +0000 UTC
Jan 29 15:11:34 crc kubenswrapper[4757]: I0129 15:11:34.929218 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:34 crc kubenswrapper[4757]: I0129 15:11:34.929256 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:34 crc kubenswrapper[4757]: I0129 15:11:34.929277 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:34 crc kubenswrapper[4757]: I0129 15:11:34.929295 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:34 crc kubenswrapper[4757]: I0129 15:11:34.929305 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:34Z","lastTransitionTime":"2026-01-29T15:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 29 15:11:34 crc kubenswrapper[4757]: E0129 15:11:34.944689 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:34Z is after 
2025-08-24T17:21:41Z" Jan 29 15:11:34 crc kubenswrapper[4757]: I0129 15:11:34.948832 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:34 crc kubenswrapper[4757]: I0129 15:11:34.948902 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:34 crc kubenswrapper[4757]: I0129 15:11:34.948917 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:34 crc kubenswrapper[4757]: I0129 15:11:34.948933 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:34 crc kubenswrapper[4757]: I0129 15:11:34.948944 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:34Z","lastTransitionTime":"2026-01-29T15:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:34 crc kubenswrapper[4757]: E0129 15:11:34.963300 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:34Z is after 
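Every retry above dies at the same admission webhook: the node-identity endpoint at 127.0.0.1:9743 is serving a certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-29. A minimal sketch for confirming the validity window from the node follows; the script is illustrative (not part of this cluster's tooling) and assumes Python with the cryptography package installed:

    import ssl
    from datetime import datetime, timezone
    from cryptography import x509

    # Fetch the webhook's serving certificate without verification (an expired
    # cert would otherwise abort the handshake, as it does for the kubelet).
    pem = ssl.get_server_certificate(("127.0.0.1", 9743))
    cert = x509.load_pem_x509_certificate(pem.encode())
    now = datetime.now(timezone.utc)
    not_after = cert.not_valid_after_utc  # cryptography >= 42; older versions: not_valid_after
    print(f"notBefore={cert.not_valid_before_utc} notAfter={not_after} now={now}")
    if now > not_after:
        print("expired: matches the kubelet's x509 verification failure above")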
2025-08-24T17:21:41Z" Jan 29 15:11:34 crc kubenswrapper[4757]: I0129 15:11:34.967204 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:34 crc kubenswrapper[4757]: I0129 15:11:34.967250 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:34 crc kubenswrapper[4757]: I0129 15:11:34.967278 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:34 crc kubenswrapper[4757]: I0129 15:11:34.967295 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:34 crc kubenswrapper[4757]: I0129 15:11:34.967314 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:34Z","lastTransitionTime":"2026-01-29T15:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:34 crc kubenswrapper[4757]: E0129 15:11:34.982539 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:34Z is after 
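The payload the kubelet keeps re-sending is a strategic merge patch against the Node object; it looks triple-escaped here only because the JSON is quoted inside err="...", which is itself quoted in the log line. Once unescaped it is ordinary JSON and easy to summarize. A rough sketch, assuming one unescaped payload has already been saved to patch.json (extracting and unescaping it from the log is left to your tooling):

    import json

    patch = json.loads(open("patch.json").read())
    status = patch["status"]
    # "$setElementOrder/conditions" pins the merge order of the conditions list.
    print("allocatable:", status["allocatable"])  # e.g. cpu 7800m of the 8-core capacity
    print("capacity:   ", status["capacity"])
    for cond in status["conditions"]:
        print(f'{cond["type"]:16} status={cond["status"]:5} reason={cond["reason"]}')
    print("images:", len(status.get("images", [])), "entries")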
2025-08-24T17:21:41Z" Jan 29 15:11:34 crc kubenswrapper[4757]: I0129 15:11:34.988899 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:34 crc kubenswrapper[4757]: I0129 15:11:34.988938 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:34 crc kubenswrapper[4757]: I0129 15:11:34.988947 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:34 crc kubenswrapper[4757]: I0129 15:11:34.988960 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:34 crc kubenswrapper[4757]: I0129 15:11:34.988970 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:34Z","lastTransitionTime":"2026-01-29T15:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:35 crc kubenswrapper[4757]: E0129 15:11:35.003528 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:35Z is after 
2025-08-24T17:21:41Z" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.008118 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.008191 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.008205 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.008223 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.008250 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:35Z","lastTransitionTime":"2026-01-29T15:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:35 crc kubenswrapper[4757]: E0129 15:11:35.022638 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:35Z is after 
2025-08-24T17:21:41Z" Jan 29 15:11:35 crc kubenswrapper[4757]: E0129 15:11:35.022798 4757 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.024869 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.024929 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.024951 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.024977 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.024991 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:35Z","lastTransitionTime":"2026-01-29T15:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.127684 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.127997 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.128083 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.128151 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.128219 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:35Z","lastTransitionTime":"2026-01-29T15:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.230966 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.231300 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.231394 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.231509 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.231572 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:35Z","lastTransitionTime":"2026-01-29T15:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.333978 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.334053 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.334069 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.334085 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.334128 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:35Z","lastTransitionTime":"2026-01-29T15:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.396292 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:11:35 crc kubenswrapper[4757]: E0129 15:11:35.396464 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.413414 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 09:39:03.853981217 +0000 UTC Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.436873 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.436910 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.436918 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.436934 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.436945 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:35Z","lastTransitionTime":"2026-01-29T15:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.539544 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.539603 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.539616 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.539632 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.539643 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:35Z","lastTransitionTime":"2026-01-29T15:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.642156 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.642233 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.642257 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.642331 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.642350 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:35Z","lastTransitionTime":"2026-01-29T15:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.745092 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.745411 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.745500 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.745585 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.745676 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:35Z","lastTransitionTime":"2026-01-29T15:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.848222 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.848488 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.848635 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.848734 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.848811 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:35Z","lastTransitionTime":"2026-01-29T15:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.952306 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.952746 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.952827 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.952915 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:35 crc kubenswrapper[4757]: I0129 15:11:35.953004 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:35Z","lastTransitionTime":"2026-01-29T15:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.056586 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.056633 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.056644 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.056659 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.056671 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:36Z","lastTransitionTime":"2026-01-29T15:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.159383 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.159748 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.159856 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.159976 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.160050 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:36Z","lastTransitionTime":"2026-01-29T15:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.262351 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.262419 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.262433 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.262455 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.262472 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:36Z","lastTransitionTime":"2026-01-29T15:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.365487 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.365550 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.365560 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.365586 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.365599 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:36Z","lastTransitionTime":"2026-01-29T15:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.395498 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.395566 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.395644 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:11:36 crc kubenswrapper[4757]: E0129 15:11:36.395705 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:11:36 crc kubenswrapper[4757]: E0129 15:11:36.395849 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:11:36 crc kubenswrapper[4757]: E0129 15:11:36.395966 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.414632 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 08:41:28.46645358 +0000 UTC Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.467893 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.467935 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.467971 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.467990 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.468001 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:36Z","lastTransitionTime":"2026-01-29T15:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.570664 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.570744 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.570756 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.570778 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.570793 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:36Z","lastTransitionTime":"2026-01-29T15:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.675311 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.675372 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.675387 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.675412 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.675429 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:36Z","lastTransitionTime":"2026-01-29T15:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.778541 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.779175 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.779211 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.779235 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.779248 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:36Z","lastTransitionTime":"2026-01-29T15:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.881850 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.881925 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.881951 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.881974 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.881993 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:36Z","lastTransitionTime":"2026-01-29T15:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.984868 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.984899 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.984908 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.984921 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:36 crc kubenswrapper[4757]: I0129 15:11:36.984930 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:36Z","lastTransitionTime":"2026-01-29T15:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.087482 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.087535 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.087546 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.087560 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.087570 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:37Z","lastTransitionTime":"2026-01-29T15:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.190460 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.190495 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.190503 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.190517 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.190528 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:37Z","lastTransitionTime":"2026-01-29T15:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.293613 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.293796 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.293818 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.293843 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.293858 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:37Z","lastTransitionTime":"2026-01-29T15:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.395471 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:11:37 crc kubenswrapper[4757]: E0129 15:11:37.395704 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.399209 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.399239 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.399284 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.399301 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.399313 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:37Z","lastTransitionTime":"2026-01-29T15:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.414615 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8edba3ca332ecbe447cdb9d9fc5ab2f3c07cf42253ebffa7fbc669e21b9789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e189245d7b997f49cf802b2b19920c9ea002c2fed94b5020b8dd5c8955e5007\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6v5r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.414896 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 05:18:20.663680607 +0000 UTC Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.436691 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.458795 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.477639 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.491408 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.501847 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.501877 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.501886 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.501899 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.501907 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:37Z","lastTransitionTime":"2026-01-29T15:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.505203 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.523296 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3412536b3e56824e5591481586ad2a7b78e3cab13dcb63aad36e544ec281b3b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.538501 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.556120 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b3162370fef0e0542b623cc6779b6642109211555df78e82440a74fedf376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:11:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.577148 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://068d44de664f08238a950818d55a58ba5db108661e28b9152a194ba625c0c280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://068d44de664f08238a950818d55a58ba5db108661e28b9152a194ba625c0c280\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:27Z\\\",\\\"message\\\":\\\"c3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z]\\\\nI0129 15:11:27.604186 6329 services_controller.go:434] Service openshift-etcd-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-etcd-operator ff1da138-ae82-4792-ae1f-3b2df1427723 4289 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:etcd-operator] map[include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:etcd-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075b05b7 \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8fwvd_openshift-ovn-kubernetes(e6815a1b-56eb-4075-84ae-1af5d0dcb742)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.594134 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.605006 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.605099 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.605118 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.605143 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.605159 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:37Z","lastTransitionTime":"2026-01-29T15:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.607334 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drtf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c722d3b-1755-4633-967e-35591890a231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drtf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.622112 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.642951 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.658589 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cc9943-8670-4bdc-a5c0-b7f5260603f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ea7daad0c4fdedbde11ce3aefb6b535e868f6af94052fae4e04ab8cb9192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6396476b582de4682f63061b8c38324ea5e530d9227f520bad41f5fab1e3fa50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef9b463efd0451a165de686ba00645d799cddbf305a6f7227954dac78bbfe53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1d3c2dec6b32ab9db2187d008e2416c05f2b7f47a588b0d7e8e4d9645eb0ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1d3c2dec6b32ab9db2187d008e2416c05f2b7f47a588b0d7e8e4d9645eb0ffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.674379 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.684931 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:37Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.708291 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.708346 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.708358 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.708379 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.708390 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:37Z","lastTransitionTime":"2026-01-29T15:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.811710 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.811775 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.811791 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.811813 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.811825 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:37Z","lastTransitionTime":"2026-01-29T15:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.914248 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.914311 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.914333 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.914349 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:37 crc kubenswrapper[4757]: I0129 15:11:37.914361 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:37Z","lastTransitionTime":"2026-01-29T15:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.016740 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.016792 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.016800 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.016819 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.016829 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:38Z","lastTransitionTime":"2026-01-29T15:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.118357 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.118390 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.118398 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.118411 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.118419 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:38Z","lastTransitionTime":"2026-01-29T15:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.221662 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.221736 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.221748 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.221828 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.221857 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:38Z","lastTransitionTime":"2026-01-29T15:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.324328 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.324376 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.324385 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.324400 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.324411 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:38Z","lastTransitionTime":"2026-01-29T15:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.396374 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.396451 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.396411 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:11:38 crc kubenswrapper[4757]: E0129 15:11:38.396583 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:11:38 crc kubenswrapper[4757]: E0129 15:11:38.396736 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:11:38 crc kubenswrapper[4757]: E0129 15:11:38.396859 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.415676 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 16:20:43.83662494 +0000 UTC Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.426779 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.426840 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.426852 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.426869 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.426883 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:38Z","lastTransitionTime":"2026-01-29T15:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.529653 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.529705 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.529715 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.529730 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.529746 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:38Z","lastTransitionTime":"2026-01-29T15:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.631893 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.631994 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.632012 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.632035 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.632048 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:38Z","lastTransitionTime":"2026-01-29T15:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.735458 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.735534 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.735555 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.735579 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.735593 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:38Z","lastTransitionTime":"2026-01-29T15:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.838104 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.838163 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.838174 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.838194 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.838207 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:38Z","lastTransitionTime":"2026-01-29T15:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.940649 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.940706 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.940717 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.940743 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:38 crc kubenswrapper[4757]: I0129 15:11:38.940758 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:38Z","lastTransitionTime":"2026-01-29T15:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.044073 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.044163 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.044199 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.044231 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.044257 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:39Z","lastTransitionTime":"2026-01-29T15:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.148023 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.148133 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.148159 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.148192 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.148213 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:39Z","lastTransitionTime":"2026-01-29T15:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.252012 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.252079 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.252094 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.252112 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.252124 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:39Z","lastTransitionTime":"2026-01-29T15:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.355708 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.355806 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.355825 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.355854 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.355871 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:39Z","lastTransitionTime":"2026-01-29T15:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.395551 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:11:39 crc kubenswrapper[4757]: E0129 15:11:39.395741 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.416173 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 04:22:14.031828204 +0000 UTC Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.463081 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.463149 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.463159 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.463179 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.463191 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:39Z","lastTransitionTime":"2026-01-29T15:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.566661 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.566699 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.566711 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.566730 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.566745 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:39Z","lastTransitionTime":"2026-01-29T15:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.669947 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.670021 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.670032 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.670053 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.670065 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:39Z","lastTransitionTime":"2026-01-29T15:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.772904 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.772940 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.772952 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.772967 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.772978 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:39Z","lastTransitionTime":"2026-01-29T15:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.875556 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.875633 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.875657 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.875686 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.875708 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:39Z","lastTransitionTime":"2026-01-29T15:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.978203 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.978246 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.978259 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.978295 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:39 crc kubenswrapper[4757]: I0129 15:11:39.978307 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:39Z","lastTransitionTime":"2026-01-29T15:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.080908 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.080979 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.080999 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.081026 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.081047 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:40Z","lastTransitionTime":"2026-01-29T15:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.183627 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.183666 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.183678 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.183696 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.183708 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:40Z","lastTransitionTime":"2026-01-29T15:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.286682 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.286758 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.286786 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.286815 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.286837 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:40Z","lastTransitionTime":"2026-01-29T15:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.388919 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.388988 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.389006 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.389027 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.389043 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:40Z","lastTransitionTime":"2026-01-29T15:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.395547 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.395574 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.395642 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:40 crc kubenswrapper[4757]: E0129 15:11:40.395657 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:11:40 crc kubenswrapper[4757]: E0129 15:11:40.395745 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:11:40 crc kubenswrapper[4757]: E0129 15:11:40.395811 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.417301 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 08:21:44.467128304 +0000 UTC Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.491659 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.491702 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.491715 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.491732 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.491743 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:40Z","lastTransitionTime":"2026-01-29T15:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.594380 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.594427 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.594442 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.594460 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.594472 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:40Z","lastTransitionTime":"2026-01-29T15:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.697671 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.697718 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.697730 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.697748 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.697761 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:40Z","lastTransitionTime":"2026-01-29T15:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.800714 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.800747 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.800755 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.800769 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.800795 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:40Z","lastTransitionTime":"2026-01-29T15:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.903035 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.903070 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.903080 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.903094 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:40 crc kubenswrapper[4757]: I0129 15:11:40.903102 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:40Z","lastTransitionTime":"2026-01-29T15:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.005222 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.005256 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.005294 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.005309 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.005321 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:41Z","lastTransitionTime":"2026-01-29T15:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.108028 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.108364 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.108373 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.108389 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.108397 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:41Z","lastTransitionTime":"2026-01-29T15:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.210104 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.210136 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.210144 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.210158 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.210168 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:41Z","lastTransitionTime":"2026-01-29T15:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.313099 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.313147 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.313157 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.313176 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.313188 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:41Z","lastTransitionTime":"2026-01-29T15:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.398014 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:11:41 crc kubenswrapper[4757]: E0129 15:11:41.398172 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.415851 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.415883 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.415895 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.415910 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.415922 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:41Z","lastTransitionTime":"2026-01-29T15:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.417497 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 05:27:25.900928019 +0000 UTC Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.519355 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.519402 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.519413 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.519432 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.519451 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:41Z","lastTransitionTime":"2026-01-29T15:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.621615 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.621643 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.621651 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.621665 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.621675 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:41Z","lastTransitionTime":"2026-01-29T15:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.724638 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.724694 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.724712 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.724733 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.724745 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:41Z","lastTransitionTime":"2026-01-29T15:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.827086 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.827124 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.827133 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.827149 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.827160 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:41Z","lastTransitionTime":"2026-01-29T15:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.929588 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.929647 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.929658 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.929671 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:41 crc kubenswrapper[4757]: I0129 15:11:41.929681 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:41Z","lastTransitionTime":"2026-01-29T15:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[the preceding five-line node-status block repeats with only timestamps advancing, at 15:11:42.036, 15:11:42.139, 15:11:42.241, and 15:11:42.344]
Jan 29 15:11:42 crc kubenswrapper[4757]: I0129 15:11:42.395745 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:11:42 crc kubenswrapper[4757]: I0129 15:11:42.395773 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:11:42 crc kubenswrapper[4757]: I0129 15:11:42.395762 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:11:42 crc kubenswrapper[4757]: E0129 15:11:42.395874 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:11:42 crc kubenswrapper[4757]: E0129 15:11:42.396001 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:11:42 crc kubenswrapper[4757]: E0129 15:11:42.396082 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:11:42 crc kubenswrapper[4757]: I0129 15:11:42.417954 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 03:04:28.839007993 +0000 UTC
[the node-status block repeats at 15:11:42.446, 15:11:42.549, 15:11:42.651, 15:11:42.753, 15:11:42.855, 15:11:42.957, 15:11:43.061, 15:11:43.163, 15:11:43.266, and 15:11:43.368]
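Every one of the repeated conditions above traces back to the same root cause: the CNI configuration directory is empty, so the container runtime reports NetworkReady=false until a network plugin writes a config file there. A rough sketch of that readiness test follows, under the assumption that any .conf/.conflist/.json file counts as a configured network (this mirrors the common CNI convention, not CRI-O's exact implementation):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// networkReady reports whether the CNI conf dir holds at least one network
// definition; an empty dir is the situation the kubelet keeps logging above.
func networkReady(confDir string) (bool, error) {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil // at least one CNI network is configured
		}
	}
	return false, nil
}

func main() {
	ok, err := networkReady("/etc/kubernetes/cni/net.d")
	fmt.Println("network ready:", ok, "err:", err)
}
```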
Jan 29 15:11:43 crc kubenswrapper[4757]: I0129 15:11:43.396062 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8"
Jan 29 15:11:43 crc kubenswrapper[4757]: E0129 15:11:43.396154 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231"
Jan 29 15:11:43 crc kubenswrapper[4757]: I0129 15:11:43.396718 4757 scope.go:117] "RemoveContainer" containerID="068d44de664f08238a950818d55a58ba5db108661e28b9152a194ba625c0c280"
Jan 29 15:11:43 crc kubenswrapper[4757]: E0129 15:11:43.396962 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8fwvd_openshift-ovn-kubernetes(e6815a1b-56eb-4075-84ae-1af5d0dcb742)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742"
Jan 29 15:11:43 crc kubenswrapper[4757]: I0129 15:11:43.418203 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 04:30:04.002643855 +0000 UTC
[the node-status block repeats at 15:11:43.471, 15:11:43.574, 15:11:43.679, 15:11:43.781, 15:11:43.883, 15:11:43.986, and 15:11:44.089]
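The ovnkube-controller entry above shows the kubelet's restart throttling at work: the container is in CrashLoopBackOff and must wait 20s before the next start attempt. A toy model of that doubling back-off follows; the 10s initial delay and 5m cap are the commonly cited kubelet defaults, assumed here rather than read from this cluster's configuration:

```go
package main

import (
	"fmt"
	"time"
)

// backoffDelay doubles the wait per consecutive crash up to a cap, the same
// shape as kubelet's container restart back-off (values here are assumptions).
func backoffDelay(crashCount int, initial, limit time.Duration) time.Duration {
	d := initial
	for i := 1; i < crashCount; i++ {
		d *= 2
		if d > limit {
			return limit
		}
	}
	return d
}

func main() {
	for n := 1; n <= 6; n++ {
		// crash 2 prints 20s, matching the "back-off 20s" message above
		fmt.Printf("crash %d -> wait %s\n", n, backoffDelay(n, 10*time.Second, 5*time.Minute))
	}
}
```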
[the node-status block repeats at 15:11:44.192 and 15:11:44.295]
Jan 29 15:11:44 crc kubenswrapper[4757]: I0129 15:11:44.395159 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:11:44 crc kubenswrapper[4757]: I0129 15:11:44.395199 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:11:44 crc kubenswrapper[4757]: I0129 15:11:44.395218 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:11:44 crc kubenswrapper[4757]: E0129 15:11:44.395309 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:11:44 crc kubenswrapper[4757]: E0129 15:11:44.395434 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:11:44 crc kubenswrapper[4757]: E0129 15:11:44.395517 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[the node-status block repeats at 15:11:44.398]
Jan 29 15:11:44 crc kubenswrapper[4757]: I0129 15:11:44.419129 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 12:02:16.678537225 +0000 UTC
[the node-status block repeats at 15:11:44.501 and 15:11:44.605]
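Note that the three certificate_manager lines so far report the same expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline each time. client-go's certificate manager re-derives a jittered deadline at roughly 70-90% of the certificate's lifetime on every check, which is why the value moves around between 2025-11 and 2025-12. A sketch of that computation; the 0.7 + 0.2*rand split mirrors upstream's jitteryDuration as an assumption, and the issuance date below is invented for illustration:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a point uniformly in the 70-90% span of the
// certificate's validity window, like the kubelet-serving manager above.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // from the log
	notBefore := notAfter.AddDate(-1, 0, 0)                   // assumed 1-year cert
	for i := 0; i < 3; i++ {
		// Each call lands somewhere different, just as each log line does.
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}
```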
[the node-status block repeats at 15:11:44.709, 15:11:44.811, 15:11:44.913, 15:11:45.015, 15:11:45.117, and 15:11:45.219]
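The records that follow show the kubelet failing to patch node status. In miniature, the call it is attempting looks like the hedged client-go sketch below: a strategic-merge PATCH of the node's status subresource. The kubeconfig path and the trimmed payload are assumptions for illustration, and running this against a live cluster would overwrite the node's Ready condition:

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default home path; purely illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Strategic-merge patch of node status, trimmed to the Ready condition.
	patch := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False","reason":"KubeletNotReady"}]}}`)
	if _, err := cs.CoreV1().Nodes().PatchStatus(context.TODO(), "crc", patch); err != nil {
		// On this cluster the admission webhook rejects the call, as below.
		fmt.Println("patch failed:", err)
	}
}
```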
[the node-status block repeats at 15:11:45.225]
Jan 29 15:11:45 crc kubenswrapper[4757]: E0129 15:11:45.238515 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.243205 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.243229 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.243238 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.243252 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.243260 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:45Z","lastTransitionTime":"2026-01-29T15:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:45 crc kubenswrapper[4757]: E0129 15:11:45.255055 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.257698 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.257719 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.257727 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.257739 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.257748 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:45Z","lastTransitionTime":"2026-01-29T15:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:45 crc kubenswrapper[4757]: E0129 15:11:45.266822 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.269382 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.269404 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.269413 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.269426 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.269434 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:45Z","lastTransitionTime":"2026-01-29T15:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:45 crc kubenswrapper[4757]: E0129 15:11:45.282248 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.285408 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.285443 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.285457 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.285473 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.285485 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:45Z","lastTransitionTime":"2026-01-29T15:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:45 crc kubenswrapper[4757]: E0129 15:11:45.298247 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:45 crc kubenswrapper[4757]: E0129 15:11:45.298421 4757 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.322262 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
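All of the patch failures above share one root cause: the serving certificate presented by the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-29T15:11:45Z, so every status update is rejected during the TLS handshake and the kubelet gives up once its retry budget is spent ("update node status exceeds retry count"). A minimal Go sketch of the validity-window test that fails here, using only the standard library; the PEM path is a placeholder, not taken from this log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Placeholder path; substitute the webhook's actual serving certificate.
	data, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n", cert.NotBefore, cert.NotAfter, now)
	// The same window check that produces the handshake error in the log:
	// "x509: certificate has expired or is not yet valid".
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Println("certificate is outside its validity window")
	}
}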
event="NodeHasSufficientMemory" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.322344 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.322363 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.322407 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.322425 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:45Z","lastTransitionTime":"2026-01-29T15:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.395897 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:11:45 crc kubenswrapper[4757]: E0129 15:11:45.396053 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.420048 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 06:26:39.885203521 +0000 UTC Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.425355 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.425388 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.425401 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.425419 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.425432 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:45Z","lastTransitionTime":"2026-01-29T15:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.527500 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.527536 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.527545 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.527558 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.527567 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:45Z","lastTransitionTime":"2026-01-29T15:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.606374 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c722d3b-1755-4633-967e-35591890a231-metrics-certs\") pod \"network-metrics-daemon-drtf8\" (UID: \"8c722d3b-1755-4633-967e-35591890a231\") " pod="openshift-multus/network-metrics-daemon-drtf8"
Jan 29 15:11:45 crc kubenswrapper[4757]: E0129 15:11:45.606597 4757 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 29 15:11:45 crc kubenswrapper[4757]: E0129 15:11:45.606683 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c722d3b-1755-4633-967e-35591890a231-metrics-certs podName:8c722d3b-1755-4633-967e-35591890a231 nodeName:}" failed. No retries permitted until 2026-01-29 15:12:17.606663889 +0000 UTC m=+100.895914126 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8c722d3b-1755-4633-967e-35591890a231-metrics-certs") pod "network-metrics-daemon-drtf8" (UID: "8c722d3b-1755-4633-967e-35591890a231") : object "openshift-multus"/"metrics-daemon-secret" not registered
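The nestedpendingoperations entry defers the next mount attempt by 32s ("No retries permitted until ... durationBeforeRetry 32s"): the volume manager backs off failed operations exponentially rather than retrying in a tight loop. A minimal sketch under the assumption of a 500ms initial delay, a 2x factor, and a cap; these parameters are illustrative, though they are consistent with the 32s figure after seven consecutive failures:

package main

import (
	"fmt"
	"time"
)

// backoff returns the delay before retry number n (1-based), doubling
// from initial and never exceeding max. Parameter values are assumptions
// for illustration, not read from kubelet configuration.
func backoff(n int, initial, max time.Duration) time.Duration {
	d := initial
	for i := 1; i < n; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	for n := 1; n <= 8; n++ {
		fmt.Printf("failure %d -> wait %s\n", n, backoff(n, 500*time.Millisecond, 2*time.Minute))
	}
	// failure 7 -> wait 32s, matching durationBeforeRetry in the log.
}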
Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.630130 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.630169 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.630177 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.630191 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.630200 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:45Z","lastTransitionTime":"2026-01-29T15:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.732882 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.732921 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.732932 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.732950 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.732963 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:45Z","lastTransitionTime":"2026-01-29T15:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
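The condition message repeated throughout this window comes from the container runtime finding no network configuration: /etc/kubernetes/cni/net.d/ contains no CNI config, so NetworkReady stays false and no pod sandbox can be created. A minimal sketch of that directory probe; the real check lives in the runtime's CNI plugin handling, not in this exact form:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	var confs []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		// This is the state the kubelet keeps reporting:
		// NetworkReady=false / NetworkPluginNotReady.
		fmt.Println("no CNI configuration file found; network plugin not ready")
		return
	}
	fmt.Println("CNI configs:", confs)
}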
Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.835663 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.835715 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.835726 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.835742 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.835751 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:45Z","lastTransitionTime":"2026-01-29T15:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.937696 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.937738 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.937754 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.937771 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:45 crc kubenswrapper[4757]: I0129 15:11:45.937783 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:45Z","lastTransitionTime":"2026-01-29T15:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.143215 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.143241 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.143249 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.143288 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.143299 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:46Z","lastTransitionTime":"2026-01-29T15:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.245166 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.245204 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.245218 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.245234 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.245244 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:46Z","lastTransitionTime":"2026-01-29T15:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.347889 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.347924 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.347935 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.347950 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.347960 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:46Z","lastTransitionTime":"2026-01-29T15:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.395522 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.395559 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:11:46 crc kubenswrapper[4757]: E0129 15:11:46.395647 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.395534 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:11:46 crc kubenswrapper[4757]: E0129 15:11:46.395776 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:11:46 crc kubenswrapper[4757]: E0129 15:11:46.395949 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.420224 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 20:09:19.009765202 +0000 UTC Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.450239 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.450306 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.450322 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.450343 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.450360 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:46Z","lastTransitionTime":"2026-01-29T15:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.553162 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.553198 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.553209 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.553225 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.553235 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:46Z","lastTransitionTime":"2026-01-29T15:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.655581 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.655652 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.655668 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.655683 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.655717 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:46Z","lastTransitionTime":"2026-01-29T15:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.757767 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.757800 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.757809 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.757821 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.757830 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:46Z","lastTransitionTime":"2026-01-29T15:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.861085 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.861134 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.861147 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.861166 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.861181 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:46Z","lastTransitionTime":"2026-01-29T15:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.963589 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.963612 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.963621 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.963633 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:46 crc kubenswrapper[4757]: I0129 15:11:46.963642 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:46Z","lastTransitionTime":"2026-01-29T15:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.066193 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.066300 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.066317 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.066337 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.066347 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:47Z","lastTransitionTime":"2026-01-29T15:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.168339 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.168380 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.168391 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.168408 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.168419 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:47Z","lastTransitionTime":"2026-01-29T15:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.270729 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.270774 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.270784 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.270800 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.270813 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:47Z","lastTransitionTime":"2026-01-29T15:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.372983 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.373026 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.373039 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.373057 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.373070 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:47Z","lastTransitionTime":"2026-01-29T15:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.395236 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:11:47 crc kubenswrapper[4757]: E0129 15:11:47.395373 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.408685 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.419968 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drtf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c722d3b-1755-4633-967e-35591890a231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drtf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.421016 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 12:49:38.14446859 +0000 UTC Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.430518 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.441657 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cc9943-8670-4bdc-a5c0-b7f5260603f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ea7daad0c4fdedbde11ce3aefb6b535e868f6af94052fae4e04ab8cb9192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6396476b582de4682f63061b8c38324ea5e530d9227f520bad41f5fab1e3fa50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef9b463efd0451a165de686ba00645d799cddbf305a6f7227954dac78bbfe53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1d3c2dec6b32ab9db2187d008e2416c05f2b7f47a588b0d7e8e4d9645eb0ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1d3c2dec6b32ab9db2187d008e2416c05f2b7f47a588b0d7e8e4d9645eb0ffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.464087 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.474296 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.474338 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.474350 4757 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.474366 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.474379 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:47Z","lastTransitionTime":"2026-01-29T15:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.482912 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.495984 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.517380 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.529154 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.540523 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.550218 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.560946 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.572020 4757 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8edba3ca332ecbe447cdb9d9fc5ab2f3c07cf42253ebffa7fbc669e21b9789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e189245d7b997f49cf802b2b19920c9ea002c2fed94b5020b8dd5c8955e5007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6v5r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.575971 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.575995 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.576003 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.576015 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.576026 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:47Z","lastTransitionTime":"2026-01-29T15:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.585969 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3412536b3e56824e5591481586ad2a7b78e3cab13dcb63aad36e544ec281b3b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.598684 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.613656 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b3162370fef0e0542b623cc6779b6642109211555df78e82440a74fedf376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:11:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.632490 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://068d44de664f08238a950818d55a58ba5db108661e28b9152a194ba625c0c280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://068d44de664f08238a950818d55a58ba5db108661e28b9152a194ba625c0c280\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:27Z\\\",\\\"message\\\":\\\"c3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z]\\\\nI0129 15:11:27.604186 6329 services_controller.go:434] Service openshift-etcd-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-etcd-operator ff1da138-ae82-4792-ae1f-3b2df1427723 4289 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:etcd-operator] map[include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:etcd-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075b05b7 \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8fwvd_openshift-ovn-kubernetes(e6815a1b-56eb-4075-84ae-1af5d0dcb742)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.677499 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.677537 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.677550 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.677569 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.677580 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:47Z","lastTransitionTime":"2026-01-29T15:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.780036 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.780069 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.780078 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.780092 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.780101 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:47Z","lastTransitionTime":"2026-01-29T15:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.882534 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.882603 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.882615 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.882628 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.882637 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:47Z","lastTransitionTime":"2026-01-29T15:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.984614 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.984651 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.984660 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.984673 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:47 crc kubenswrapper[4757]: I0129 15:11:47.984683 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:47Z","lastTransitionTime":"2026-01-29T15:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.086828 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.086889 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.086906 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.086941 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.086958 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:48Z","lastTransitionTime":"2026-01-29T15:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.189658 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.189698 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.189706 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.189720 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.189729 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:48Z","lastTransitionTime":"2026-01-29T15:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
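
[editor's note] The recurring "failed calling webhook ... x509: certificate has expired or is not yet valid" errors above are kubelet's client-side TLS verification rejecting the webhook's serving certificate, whose NotAfter (2025-08-24T17:21:41Z) is behind the node clock (2026-01-29). Below is a minimal, stdlib-only Go sketch of the same validity-window check; it is illustrative only, not the cluster's code, and the certificate file path is a placeholder.

// certcheck.go: reproduce the x509 validity-window check that fails in the
// log above. Illustrative sketch; "webhook-serving.crt" is a placeholder.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("webhook-serving.crt") // hypothetical path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now().UTC()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	case now.After(cert.NotAfter):
		// The case reported above: current time is after NotAfter.
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}
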
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.293285 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.293322 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.293335 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.293351 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.293360 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:48Z","lastTransitionTime":"2026-01-29T15:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.395223 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.395280 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.395293 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:11:48 crc kubenswrapper[4757]: E0129 15:11:48.395339 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:11:48 crc kubenswrapper[4757]: E0129 15:11:48.395453 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.395487 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:48 crc kubenswrapper[4757]: E0129 15:11:48.395509 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.395572 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.395589 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.395605 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.395615 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:48Z","lastTransitionTime":"2026-01-29T15:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
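
[editor's note] The NetworkReady=false condition driving the NodeNotReady events and skipped pod syncs above comes from the CRI runtime finding no CNI network configuration. A rough Go illustration of that directory check follows; it assumes the file extensions libcni accepts (.conf, .conflist, .json) and is a simplified sketch, not the actual cri-o/libcni implementation.

// cnicheck.go: approximate the check behind "no CNI configuration file in
// /etc/kubernetes/cni/net.d/". Simplified sketch under stated assumptions.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", confDir, err)
		return
	}
	var confs []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni loads (assumption)
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		// Matches the condition kubelet keeps reporting above.
		fmt.Println("NetworkReady=false: no CNI configuration file found")
		return
	}
	fmt.Printf("NetworkReady=true: found %v\n", confs)
}
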
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.421889 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 06:22:57.924695531 +0000 UTC
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.497571 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.497610 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.497621 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.497638 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.497647 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:48Z","lastTransitionTime":"2026-01-29T15:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
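
[editor's note] The certificate_manager.go line above shows the kubelet-serving certificate is still inside its lifetime, with rotation scheduled well ahead of expiry. Client-go's certificate manager picks the rotation deadline at a random point roughly 70-90% into the certificate's validity window; that band, and the one-year lifetime below (NotBefore is not in the log), are assumptions from reading client-go, not facts from this log. A sketch of the scheme:

// rotation.go: sketch of the jittered rotation-deadline calculation behind
// the certificate_manager.go line above. Band boundaries are an assumption.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	// Uniformly random fraction in [0.7, 0.9) of the lifetime (assumed band).
	jitter := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(total) * jitter))
}

func main() {
	// Expiration value taken from the log line above.
	notAfter, err := time.Parse("2006-01-02 15:04:05 -0700 MST", "2026-02-24 05:53:03 +0000 UTC")
	if err != nil {
		panic(err)
	}
	notBefore := notAfter.AddDate(-1, 0, 0) // assumed 1-year lifetime, not in the log
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}
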
Has your network provider started?"} Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.703319 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.703351 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.703373 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.703386 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.703395 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:48Z","lastTransitionTime":"2026-01-29T15:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.806454 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.806486 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.806495 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.806509 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.806518 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:48Z","lastTransitionTime":"2026-01-29T15:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.908417 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.908467 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.908481 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.908497 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:48 crc kubenswrapper[4757]: I0129 15:11:48.908512 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:48Z","lastTransitionTime":"2026-01-29T15:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.011034 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.011083 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.011092 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.011113 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.011126 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:49Z","lastTransitionTime":"2026-01-29T15:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.113708 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.113756 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.113768 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.113786 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.113797 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:49Z","lastTransitionTime":"2026-01-29T15:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.216131 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.216169 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.216180 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.216195 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.216207 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:49Z","lastTransitionTime":"2026-01-29T15:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.318632 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.318671 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.318682 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.318701 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.318711 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:49Z","lastTransitionTime":"2026-01-29T15:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.396225 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:11:49 crc kubenswrapper[4757]: E0129 15:11:49.396379 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.420909 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.420948 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.420958 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.421106 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.421122 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:49Z","lastTransitionTime":"2026-01-29T15:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.422012 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 09:59:40.727116989 +0000 UTC Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.523069 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.523105 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.523113 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.523128 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.523139 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:49Z","lastTransitionTime":"2026-01-29T15:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.624772 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.624812 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.624824 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.624844 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.624856 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:49Z","lastTransitionTime":"2026-01-29T15:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.727281 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.727325 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.727337 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.727355 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.727371 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:49Z","lastTransitionTime":"2026-01-29T15:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.825823 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bcbdt_fe6866d7-5a43-46d5-ba84-264847f9cd30/kube-multus/0.log" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.825864 4757 generic.go:334] "Generic (PLEG): container finished" podID="fe6866d7-5a43-46d5-ba84-264847f9cd30" containerID="8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2" exitCode=1 Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.825887 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bcbdt" event={"ID":"fe6866d7-5a43-46d5-ba84-264847f9cd30","Type":"ContainerDied","Data":"8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2"} Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.826176 4757 scope.go:117] "RemoveContainer" containerID="8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.830347 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.830381 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.830392 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.830406 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.830415 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:49Z","lastTransitionTime":"2026-01-29T15:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.840871 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:49Z\\\",\\\"message\\\":\\\"2026-01-29T15:11:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bcc6513c-ad5c-4fd6-8be7-13fb77a8dc4c\\\\n2026-01-29T15:11:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bcc6513c-ad5c-4fd6-8be7-13fb77a8dc4c to /host/opt/cni/bin/\\\\n2026-01-29T15:11:04Z [verbose] multus-daemon started\\\\n2026-01-29T15:11:04Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:11:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.867685 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b3162370fef0e0542b623cc6779b6642109211555df78e82440a74fedf376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.889670 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://068d44de664f08238a950818d55a58ba5db108661e28b9152a194ba625c0c280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://068d44de664f08238a950818d55a58ba5db108661e28b9152a194ba625c0c280\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:27Z\\\",\\\"message\\\":\\\"c3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z]\\\\nI0129 15:11:27.604186 6329 services_controller.go:434] Service openshift-etcd-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-etcd-operator ff1da138-ae82-4792-ae1f-3b2df1427723 4289 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:etcd-operator] map[include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:etcd-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075b05b7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8fwvd_openshift-ovn-kubernetes(e6815a1b-56eb-4075-84ae-1af5d0dcb742)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.903423 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3412536b3e56824e5591481586ad2a7b78e3cab13dcb63aad36e544ec281b3b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.914666 4757 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.925090 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drtf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c722d3b-1755-4633-967e-35591890a231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drtf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.935260 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.935313 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.935324 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.935340 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.935638 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:49Z","lastTransitionTime":"2026-01-29T15:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.937118 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cc9943-8670-4bdc-a5c0-b7f5260603f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ea7daad0c4fdedbde11ce3aefb6b535e868f6af94052fae4e04ab8cb9192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6396476b582de4682f63061b8c38324ea5e530d9227f520bad41f5fab1e3fa50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef9b463efd0451a165de686ba00645d799cddbf305a6f7227954dac78bbfe53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1d3c2dec6b32ab9db2187d008e2416c05f2b7f47a588b0d7e8e4d9645eb0ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1d3c2dec6b32ab9db2187d008e2416c05f2b7f47a588b0d7e8e4d9645eb0ffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.949748 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.961052 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.972303 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:49 crc kubenswrapper[4757]: I0129 15:11:49.986694 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.000654 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.012993 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.032996 4757 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:50Z is after 2025-08-24T17:21:41Z" Jan 29 
15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.037766 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.037813 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.037839 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.037867 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.037880 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:50Z","lastTransitionTime":"2026-01-29T15:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.042860 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8edba3ca332ecbe447cdb9d9fc5ab2f3c07cf42253ebffa7fbc669e21b9789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e189245d7b997f49cf802b2b19920c9ea002c2fed94b5020b8dd5c8955e5007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b15
4edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6v5r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.059600 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.072752 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.139887 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.139939 4757 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.139949 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.139964 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.139974 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:50Z","lastTransitionTime":"2026-01-29T15:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.242941 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.242985 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.242997 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.243018 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.243030 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:50Z","lastTransitionTime":"2026-01-29T15:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.345353 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.345395 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.345405 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.345422 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.345433 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:50Z","lastTransitionTime":"2026-01-29T15:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.396162 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.396222 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.396409 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:11:50 crc kubenswrapper[4757]: E0129 15:11:50.396543 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:11:50 crc kubenswrapper[4757]: E0129 15:11:50.396689 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:11:50 crc kubenswrapper[4757]: E0129 15:11:50.396790 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.422285 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 10:15:29.162853118 +0000 UTC Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.451469 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.451497 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.451508 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.451527 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.451536 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:50Z","lastTransitionTime":"2026-01-29T15:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.553436 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.553489 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.553501 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.553519 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.553533 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:50Z","lastTransitionTime":"2026-01-29T15:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.655355 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.655652 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.655729 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.655808 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.655884 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:50Z","lastTransitionTime":"2026-01-29T15:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.758303 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.758353 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.758367 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.758386 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.758399 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:50Z","lastTransitionTime":"2026-01-29T15:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.830190 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bcbdt_fe6866d7-5a43-46d5-ba84-264847f9cd30/kube-multus/0.log" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.830321 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bcbdt" event={"ID":"fe6866d7-5a43-46d5-ba84-264847f9cd30","Type":"ContainerStarted","Data":"06723594ec631b4e23ea44dab6453e705a548052738d6da15ae230b788e10933"} Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.850531 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\
\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3412536b3e56824e5591481586ad2a7b78e3cab13dcb63aad36e544ec281b3b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.860904 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.860945 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.860956 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.860975 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.860987 4757 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:50Z","lastTransitionTime":"2026-01-29T15:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.866473 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06723594ec631b4e23ea44dab6453e705a548052738d6da15ae230b788e10933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:49Z\\\",\\\"message\\\":\\\"2026-01-29T15:11:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bcc6513c-ad5c-4fd6-8be7-13fb77a8dc4c\\\\n2026-01-29T15:11:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bcc6513c-ad5c-4fd6-8be7-13fb77a8dc4c to /host/opt/cni/bin/\\\\n2026-01-29T15:11:04Z [verbose] multus-daemon started\\\\n2026-01-29T15:11:04Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:11:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.880100 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b3162370fef0e0542b623cc6779b6642109211555df78e82440a74fedf376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.899767 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://068d44de664f08238a950818d55a58ba5db108661e28b9152a194ba625c0c280\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://068d44de664f08238a950818d55a58ba5db108661e28b9152a194ba625c0c280\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:27Z\\\",\\\"message\\\":\\\"c3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z]\\\\nI0129 15:11:27.604186 6329 services_controller.go:434] Service openshift-etcd-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-etcd-operator ff1da138-ae82-4792-ae1f-3b2df1427723 4289 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:etcd-operator] map[include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:etcd-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075b05b7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8fwvd_openshift-ovn-kubernetes(e6815a1b-56eb-4075-84ae-1af5d0dcb742)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.914568 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.926693 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drtf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c722d3b-1755-4633-967e-35591890a231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drtf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.939423 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.952019 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.963095 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.963125 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.963137 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.963164 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.963175 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:50Z","lastTransitionTime":"2026-01-29T15:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.963204 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cc9943-8670-4bdc-a5c0-b7f5260603f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ea7daad0c4fdedbde11ce3aefb6b535e868f6af94052fae4e04ab8cb9192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6396476b582de4682f63061b8c38324ea5e530d9227f520bad41f5fab1e3fa50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef9b463efd0451a165de686ba00645d799cddbf305a6f7227954dac78bbfe53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1d3c2dec6b32ab9db2187d008e2416c05f2b7f47a588b0d7e8e4d9645eb0ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1d3c2dec6b32ab9db2187d008e2416c05f2b7f47a588b0d7e8e4d9645eb0ffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.977640 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:50 crc kubenswrapper[4757]: I0129 15:11:50.986174 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.000367 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8edba3ca332ecbe447cdb9d9fc5ab2f3c07cf42253ebffa7fbc669e21b9789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e189245d7b997f49cf802b2b19920c9ea002c2fed94b5020b8dd5c8955e5007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6v5r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.013951 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.029537 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.041083 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.054029 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.065739 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.065785 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.065798 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.065814 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.065826 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:51Z","lastTransitionTime":"2026-01-29T15:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.067451 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.169444 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.169536 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.169568 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.169587 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.169600 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:51Z","lastTransitionTime":"2026-01-29T15:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.271780 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.271820 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.271829 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.271843 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.271852 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:51Z","lastTransitionTime":"2026-01-29T15:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.375044 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.375088 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.375100 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.375116 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.375129 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:51Z","lastTransitionTime":"2026-01-29T15:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.395377 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:11:51 crc kubenswrapper[4757]: E0129 15:11:51.395533 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.423405 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 00:21:51.709079604 +0000 UTC Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.477727 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.477764 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.477773 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.477788 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.477798 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:51Z","lastTransitionTime":"2026-01-29T15:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.579902 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.579966 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.579977 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.580013 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.580034 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:51Z","lastTransitionTime":"2026-01-29T15:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.682303 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.682344 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.682356 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.682391 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.682401 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:51Z","lastTransitionTime":"2026-01-29T15:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.784642 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.784696 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.784708 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.784726 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.784739 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:51Z","lastTransitionTime":"2026-01-29T15:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.886555 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.886597 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.886607 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.886621 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.886630 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:51Z","lastTransitionTime":"2026-01-29T15:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.988967 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.989055 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.989071 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.989102 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:51 crc kubenswrapper[4757]: I0129 15:11:51.989114 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:51Z","lastTransitionTime":"2026-01-29T15:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.091552 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.091587 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.091596 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.091610 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.091621 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:52Z","lastTransitionTime":"2026-01-29T15:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.195431 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.195496 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.195520 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.195546 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.195559 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:52Z","lastTransitionTime":"2026-01-29T15:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.298551 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.298589 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.298600 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.298614 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.298623 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:52Z","lastTransitionTime":"2026-01-29T15:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.395763 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.395835 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.396065 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:52 crc kubenswrapper[4757]: E0129 15:11:52.396336 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:11:52 crc kubenswrapper[4757]: E0129 15:11:52.396635 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:11:52 crc kubenswrapper[4757]: E0129 15:11:52.396678 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.401934 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.401982 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.401993 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.402010 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.402022 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:52Z","lastTransitionTime":"2026-01-29T15:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.423875 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 21:28:39.195455881 +0000 UTC Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.505374 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.505456 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.505479 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.505511 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.505533 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:52Z","lastTransitionTime":"2026-01-29T15:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.609498 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.609559 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.609576 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.609601 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.609619 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:52Z","lastTransitionTime":"2026-01-29T15:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.712736 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.712805 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.712819 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.712842 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.712862 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:52Z","lastTransitionTime":"2026-01-29T15:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.815166 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.815211 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.815222 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.815240 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.815254 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:52Z","lastTransitionTime":"2026-01-29T15:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.918059 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.918136 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.918150 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.918167 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:52 crc kubenswrapper[4757]: I0129 15:11:52.918178 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:52Z","lastTransitionTime":"2026-01-29T15:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.020791 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.020843 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.020853 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.020869 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.020880 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:53Z","lastTransitionTime":"2026-01-29T15:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.124042 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.124098 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.124122 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.124155 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.124179 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:53Z","lastTransitionTime":"2026-01-29T15:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.227492 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.227548 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.227577 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.227609 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.227632 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:53Z","lastTransitionTime":"2026-01-29T15:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.331421 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.331474 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.331487 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.331503 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.331517 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:53Z","lastTransitionTime":"2026-01-29T15:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.396100 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:11:53 crc kubenswrapper[4757]: E0129 15:11:53.396309 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.424521 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 11:38:53.73244116 +0000 UTC Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.434540 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.434595 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.434611 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.434634 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.434652 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:53Z","lastTransitionTime":"2026-01-29T15:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.537425 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.537495 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.537518 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.537544 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.537563 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:53Z","lastTransitionTime":"2026-01-29T15:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.640081 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.640137 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.640153 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.640177 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.640194 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:53Z","lastTransitionTime":"2026-01-29T15:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.743576 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.743642 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.743660 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.743687 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.743705 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:53Z","lastTransitionTime":"2026-01-29T15:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.846059 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.846124 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.846143 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.846165 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.846182 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:53Z","lastTransitionTime":"2026-01-29T15:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.949342 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.949383 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.949392 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.949408 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:53 crc kubenswrapper[4757]: I0129 15:11:53.949420 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:53Z","lastTransitionTime":"2026-01-29T15:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.051897 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.052004 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.052036 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.052067 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.052092 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:54Z","lastTransitionTime":"2026-01-29T15:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.155878 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.155928 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.155938 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.155953 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.155964 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:54Z","lastTransitionTime":"2026-01-29T15:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.263212 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.263308 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.263332 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.263359 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.263383 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:54Z","lastTransitionTime":"2026-01-29T15:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.367488 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.367561 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.367580 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.367607 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.367626 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:54Z","lastTransitionTime":"2026-01-29T15:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.395885 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.396001 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.396048 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:54 crc kubenswrapper[4757]: E0129 15:11:54.396260 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:11:54 crc kubenswrapper[4757]: E0129 15:11:54.396428 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:11:54 crc kubenswrapper[4757]: E0129 15:11:54.396538 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.425705 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 02:33:46.964910317 +0000 UTC Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.470642 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.470733 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.470751 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.470779 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.470799 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:54Z","lastTransitionTime":"2026-01-29T15:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.575744 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.575783 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.575791 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.575805 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.575814 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:54Z","lastTransitionTime":"2026-01-29T15:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.679128 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.679221 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.679241 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.679366 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.679621 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:54Z","lastTransitionTime":"2026-01-29T15:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.784682 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.784726 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.784738 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.784755 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.784767 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:54Z","lastTransitionTime":"2026-01-29T15:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.886859 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.886918 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.886929 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.886949 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.886966 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:54Z","lastTransitionTime":"2026-01-29T15:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.991054 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.991129 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.991153 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.991181 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:54 crc kubenswrapper[4757]: I0129 15:11:54.991202 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:54Z","lastTransitionTime":"2026-01-29T15:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.094605 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.094652 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.094666 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.094685 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.094699 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:55Z","lastTransitionTime":"2026-01-29T15:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.197246 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.197304 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.197313 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.197326 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.197335 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:55Z","lastTransitionTime":"2026-01-29T15:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.300068 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.300119 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.300135 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.300153 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.300165 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:55Z","lastTransitionTime":"2026-01-29T15:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.307828 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.307895 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.307912 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.307954 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.307966 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:55Z","lastTransitionTime":"2026-01-29T15:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:55 crc kubenswrapper[4757]: E0129 15:11:55.327805 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:55Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.332452 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.332500 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.332512 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.332534 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.332548 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:55Z","lastTransitionTime":"2026-01-29T15:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:55 crc kubenswrapper[4757]: E0129 15:11:55.350378 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:55Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.356751 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.356814 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.356840 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.356874 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.356899 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:55Z","lastTransitionTime":"2026-01-29T15:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:55 crc kubenswrapper[4757]: E0129 15:11:55.381410 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:55Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.388317 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.388385 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.388403 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.388427 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.388444 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:55Z","lastTransitionTime":"2026-01-29T15:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.396606 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.396690 4757 scope.go:117] "RemoveContainer" containerID="068d44de664f08238a950818d55a58ba5db108661e28b9152a194ba625c0c280" Jan 29 15:11:55 crc kubenswrapper[4757]: E0129 15:11:55.396821 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:11:55 crc kubenswrapper[4757]: E0129 15:11:55.417411 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:55Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.421859 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.421898 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.421907 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.421924 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.421940 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:55Z","lastTransitionTime":"2026-01-29T15:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.426319 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 18:35:07.927978781 +0000 UTC Jan 29 15:11:55 crc kubenswrapper[4757]: E0129 15:11:55.443721 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:55Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:55 crc kubenswrapper[4757]: E0129 15:11:55.443903 4757 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.446599 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
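The node-status retries above all die on the same check: the webhook's serving certificate has a NotAfter of 2025-08-24T17:21:41Z, which is behind the node clock (2026-01-29). Below is a minimal sketch of the validity-window check Go's TLS stack performs, using only the standard library; the certificate path is a hypothetical placeholder, not a path taken from this log.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path; point this at whatever PEM cert you want to inspect.
	raw, err := os.ReadFile("/etc/webhook-certs/tls.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now().UTC()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	case now.After(cert.NotAfter):
		// This is the branch behind the repeated error in the log above.
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}
```

Until that certificate is rotated, every Post to https://127.0.0.1:9743 will fail the same way, which is why the kubelet exhausts its retry budget ("update node status exceeds retry count") on each sync.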
event="NodeHasSufficientMemory" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.446638 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.446653 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.446671 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.446685 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:55Z","lastTransitionTime":"2026-01-29T15:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.549122 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.549148 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.549159 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.549174 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.549184 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:55Z","lastTransitionTime":"2026-01-29T15:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.653971 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.654047 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.654073 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.654107 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.654132 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:55Z","lastTransitionTime":"2026-01-29T15:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.756939 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.757005 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.757030 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.757054 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.757071 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:55Z","lastTransitionTime":"2026-01-29T15:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.848989 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8fwvd_e6815a1b-56eb-4075-84ae-1af5d0dcb742/ovnkube-controller/2.log" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.852103 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" event={"ID":"e6815a1b-56eb-4075-84ae-1af5d0dcb742","Type":"ContainerStarted","Data":"5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee"} Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.852614 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.859808 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.859845 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.859856 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.859871 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.859882 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:55Z","lastTransitionTime":"2026-01-29T15:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
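The Ready=False condition keeps repeating because nothing has yet written a CNI config into /etc/kubernetes/cni/net.d/; ovn-kubernetes drops one there once ovnkube-node is healthy. The sketch below shows the kind of directory scan a CNI loader performs, assuming libcni's convention of accepting *.conf, *.conflist and *.json files sorted by name; the real loader also parses and validates each file.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"sort"
)

// findCNIConfigs lists candidate CNI config files in a conf directory.
// A minimal sketch of the discovery step only.
func findCNIConfigs(dir string) ([]string, error) {
	var files []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return nil, err
		}
		files = append(files, matches...)
	}
	sort.Strings(files)
	return files, nil
}

func main() {
	files, err := findCNIConfigs("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if len(files) == 0 {
		// The condition the kubelet keeps reporting above.
		fmt.Println("no CNI configuration file found; network not ready")
		return
	}
	fmt.Println("CNI configs:", files)
}
```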
Has your network provider started?"} Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.870091 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:55Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.893481 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:55Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.909480 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:55Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.928707 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:55Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.943005 4757 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:55Z is after 2025-08-24T17:21:41Z" Jan 29 
15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.962328 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.962391 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.962404 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.962451 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.962462 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:55Z","lastTransitionTime":"2026-01-29T15:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.967805 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8edba3ca332ecbe447cdb9d9fc5ab2f3c07cf42253ebffa7fbc669e21b9789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e189245d7b997f49cf802b2b19920c9ea002c2fed94b5020b8dd5c8955e5007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b15
4edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6v5r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:55Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.982593 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f3
6dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3412536b3e56824e5591481586ad2a7b78e3cab13dcb63aad36e544ec281b3b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:55Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:55 crc kubenswrapper[4757]: I0129 15:11:55.994213 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06723594ec631b4e23ea44dab6453e705a548052738d6da15ae230b788e10933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:49Z\\\",\\\"message\\\":\\\"2026-01-29T15:11:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bcc6513c-ad5c-4fd6-8be7-13fb77a8dc4c\\\\n2026-01-29T15:11:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bcc6513c-ad5c-4fd6-8be7-13fb77a8dc4c to /host/opt/cni/bin/\\\\n2026-01-29T15:11:04Z [verbose] multus-daemon started\\\\n2026-01-29T15:11:04Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:11:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:55Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.006469 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b3162370fef0e0542b623cc6779b6642109211555df78e82440a74fedf376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.022326 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://068d44de664f08238a950818d55a58ba5db108661e28b9152a194ba625c0c280\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:27Z\\\",\\\"message\\\":\\\"c3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z]\\\\nI0129 15:11:27.604186 6329 services_controller.go:434] Service openshift-etcd-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-etcd-operator ff1da138-ae82-4792-ae1f-3b2df1427723 4289 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:etcd-operator] map[include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:etcd-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075b05b7 \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.034112 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.043432 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drtf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c722d3b-1755-4633-967e-35591890a231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drtf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.055625 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.064180 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.064237 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.064249 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.064299 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.064316 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:56Z","lastTransitionTime":"2026-01-29T15:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.067341 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cc9943-8670-4bdc-a5c0-b7f5260603f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ea7daad0c4fdedbde11ce3aefb6b535e868f6af94052fae4e04ab8cb9192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6396476b582de4682f63061b8c38324ea5e530d9227f520bad41f5fab1e3fa50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef9b463efd0451a165de686ba00645d799cddbf305a6f7227954dac78bbfe53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1d3c2dec6b32ab9db2187d008e2416c05f2b7f47a588b0d7e8e4d9645eb0ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1d3c2dec6b32ab9db2187d008e2416c05f2b7f47a588b0d7e8e4d9645eb0ffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.082044 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.096443 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.106255 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.166474 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.166536 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.166547 4757 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.166562 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.166573 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:56Z","lastTransitionTime":"2026-01-29T15:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.269072 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.269108 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.269118 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.269132 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.269143 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:56Z","lastTransitionTime":"2026-01-29T15:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.371134 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.371181 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.371194 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.371213 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.371228 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:56Z","lastTransitionTime":"2026-01-29T15:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.395450 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.395450 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.395672 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:11:56 crc kubenswrapper[4757]: E0129 15:11:56.395867 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:11:56 crc kubenswrapper[4757]: E0129 15:11:56.396019 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:11:56 crc kubenswrapper[4757]: E0129 15:11:56.396174 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.427122 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 03:30:50.511700191 +0000 UTC
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.475089 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.475126 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.475136 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.475154 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.475164 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:56Z","lastTransitionTime":"2026-01-29T15:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.578770 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.578817 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.578841 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.578860 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.578874 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:56Z","lastTransitionTime":"2026-01-29T15:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.682113 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.682150 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.682182 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.682200 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.682211 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:56Z","lastTransitionTime":"2026-01-29T15:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.785079 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.785164 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.785196 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.785228 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.785249 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:56Z","lastTransitionTime":"2026-01-29T15:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.857590 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8fwvd_e6815a1b-56eb-4075-84ae-1af5d0dcb742/ovnkube-controller/3.log"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.858883 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8fwvd_e6815a1b-56eb-4075-84ae-1af5d0dcb742/ovnkube-controller/2.log"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.862933 4757 generic.go:334] "Generic (PLEG): container finished" podID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerID="5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee" exitCode=1
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.862984 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" event={"ID":"e6815a1b-56eb-4075-84ae-1af5d0dcb742","Type":"ContainerDied","Data":"5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee"}
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.863034 4757 scope.go:117] "RemoveContainer" containerID="068d44de664f08238a950818d55a58ba5db108661e28b9152a194ba625c0c280"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.864109 4757 scope.go:117] "RemoveContainer" containerID="5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee"
Jan 29 15:11:56 crc kubenswrapper[4757]: E0129 15:11:56.864525 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8fwvd_openshift-ovn-kubernetes(e6815a1b-56eb-4075-84ae-1af5d0dcb742)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.884889 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8edba3ca332ecbe447cdb9d9fc5ab2f3c07cf42253ebffa7fbc669e21b9789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e189245d7b997f49cf802b2b19920c9ea002c2fed94b5020b8dd5c8955e5007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6v5r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:56Z is after 2025-08-24T17:21:41Z" Jan 29 
15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.888175 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.888202 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.888213 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.888230 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.888242 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:56Z","lastTransitionTime":"2026-01-29T15:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.906465 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.926192 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.943746 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.957684 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.971050 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.988838 4757 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3412536b3e56824e5591481586ad2a7b78e3cab13dc
b63aad36e544ec281b3b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.990522 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.990571 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.990588 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.990612 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:56 crc kubenswrapper[4757]: I0129 15:11:56.990632 4757 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:56Z","lastTransitionTime":"2026-01-29T15:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.009854 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06723594ec631b4e23ea44dab6453e705a548052738d6da15ae230b788e10933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:49Z\\\",\\\"message\\\":\\\"2026-01-29T15:11:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bcc6513c-ad5c-4fd6-8be7-13fb77a8dc4c\\\\n2026-01-29T15:11:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bcc6513c-ad5c-4fd6-8be7-13fb77a8dc4c to /host/opt/cni/bin/\\\\n2026-01-29T15:11:04Z [verbose] multus-daemon started\\\\n2026-01-29T15:11:04Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:11:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.034594 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b3162370fef0e0542b623cc6779b6642109211555df78e82440a74fedf376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.061447 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://068d44de664f08238a950818d55a58ba5db108661e28b9152a194ba625c0c280\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:27Z\\\",\\\"message\\\":\\\"c3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z]\\\\nI0129 15:11:27.604186 6329 services_controller.go:434] Service openshift-etcd-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-etcd-operator ff1da138-ae82-4792-ae1f-3b2df1427723 4289 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:etcd-operator] map[include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:etcd-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075b05b7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:56Z\\\",\\\"message\\\":\\\"40288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] 
[{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00747cf67 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8383,TargetPort:{0 8383 },NodePort:0,AppProtocol:nil,},ServicePort{Name:https-metrics,Protocol:TCP,Port:8081,TargetPort:{0 8081 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: marketplace-operator,},ClusterIP:10.217.5.53,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.53],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0129 15:11:56.344728 6715 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.076618 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.086807 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drtf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c722d3b-1755-4633-967e-35591890a231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drtf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.093685 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.093721 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.093732 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.093747 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.093757 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:57Z","lastTransitionTime":"2026-01-29T15:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.098527 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.109874 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.124361 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cc9943-8670-4bdc-a5c0-b7f5260603f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ea7daad0c4fdedbde11ce3aefb6b535e868f6af94052fae4e04ab8cb9192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6396476b582de4682f63061b8c38324ea5e530d9227f520bad41f5fab1e3fa50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef9b463efd0451a165de686ba00645d799cddbf305a6f7227954dac78bbfe53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1d3c2dec6b32ab9db2187d008e2416c05f2b7f47a588b0d7e8e4d9645eb0ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1d3c2dec6b32ab9db2187d008e2416c05f2b7f47a588b0d7e8e4d9645eb0ffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.137458 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.149469 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.196689 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.196755 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.196779 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.196802 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.196858 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:57Z","lastTransitionTime":"2026-01-29T15:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.300324 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.300416 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.300445 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.300491 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.300517 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:57Z","lastTransitionTime":"2026-01-29T15:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.395618 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:11:57 crc kubenswrapper[4757]: E0129 15:11:57.395779 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.403510 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.403572 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.403591 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.403619 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.403642 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:57Z","lastTransitionTime":"2026-01-29T15:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.418635 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.427972 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 13:16:15.990141692 +0000 UTC Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.437903 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.452799 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.463833 4757 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8edba3ca332ecbe447cdb9d9fc5ab2f3c07cf42253ebffa7fbc669e21b9789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e189245d7b997f49cf802b2b19920c9ea002c2fed94b5020b8dd5c8955e5007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6v5r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.484643 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.503169 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.505897 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.505931 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.505941 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.505956 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.505967 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:57Z","lastTransitionTime":"2026-01-29T15:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.517040 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06723594ec631b4e23ea44dab6453e705a548052738d6da15ae230b788e10933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:49Z\\\",\\\"message\\\":\\\"2026-01-29T15:11:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bcc6513c-ad5c-4fd6-8be7-13fb77a8dc4c\\\\n2026-01-29T15:11:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bcc6513c-ad5c-4fd6-8be7-13fb77a8dc4c to /host/opt/cni/bin/\\\\n2026-01-29T15:11:04Z [verbose] multus-daemon started\\\\n2026-01-29T15:11:04Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:11:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.535979 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b3162370fef0e0542b623cc6779b6642109211555df78e82440a74fedf376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.559010 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://068d44de664f08238a950818d55a58ba5db108661e28b9152a194ba625c0c280\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:27Z\\\",\\\"message\\\":\\\"c3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:27Z is after 2025-08-24T17:21:41Z]\\\\nI0129 15:11:27.604186 6329 services_controller.go:434] Service openshift-etcd-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-etcd-operator ff1da138-ae82-4792-ae1f-3b2df1427723 4289 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:etcd-operator] map[include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:etcd-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0075b05b7 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:56Z\\\",\\\"message\\\":\\\"40288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] 
[{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00747cf67 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8383,TargetPort:{0 8383 },NodePort:0,AppProtocol:nil,},ServicePort{Name:https-metrics,Protocol:TCP,Port:8081,TargetPort:{0 8081 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: marketplace-operator,},ClusterIP:10.217.5.53,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.53],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0129 15:11:56.344728 6715 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.575252 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3412536b3e56824e5591481586ad2a7b78e3cab13dcb63aad36e544ec281b3b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.589942 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.599725 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drtf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c722d3b-1755-4633-967e-35591890a231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drtf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.608939 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.609002 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.609015 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.609032 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.609044 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:57Z","lastTransitionTime":"2026-01-29T15:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.613333 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cc9943-8670-4bdc-a5c0-b7f5260603f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ea7daad0c4fdedbde11ce3aefb6b535e868f6af94052fae4e04ab8cb9192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6396476b582de4682f63061b8c38324ea5e530d9227f520bad41f5fab1e3fa50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef9b463efd0451a165de686ba00645d799cddbf305a6f7227954dac78bbfe53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1d3c2dec6b32ab9db2187d008e2416c05f2b7f47a588b0d7e8e4d9645eb0ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1d3c2dec6b32ab9db2187d008e2416c05f2b7f47a588b0d7e8e4d9645eb0ffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.629416 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.641382 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.653101 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.665295 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.710908 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.710945 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.710957 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.710973 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.710982 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:57Z","lastTransitionTime":"2026-01-29T15:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.813344 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.813402 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.813419 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.813443 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.813460 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:57Z","lastTransitionTime":"2026-01-29T15:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.870341 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8fwvd_e6815a1b-56eb-4075-84ae-1af5d0dcb742/ovnkube-controller/3.log" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.874242 4757 scope.go:117] "RemoveContainer" containerID="5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee" Jan 29 15:11:57 crc kubenswrapper[4757]: E0129 15:11:57.874496 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8fwvd_openshift-ovn-kubernetes(e6815a1b-56eb-4075-84ae-1af5d0dcb742)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.886296 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.895914 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drtf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c722d3b-1755-4633-967e-35591890a231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drtf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.911962 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.916098 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.916137 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.916152 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.916171 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.916185 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:57Z","lastTransitionTime":"2026-01-29T15:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.925076 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cc9943-8670-4bdc-a5c0-b7f5260603f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ea7daad0c4fdedbde11ce3aefb6b535e868f6af94052fae4e04ab8cb9192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6396476b582de4682f63061b8c38324ea5e530d9227f520bad41f5fab1e3fa50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef9b463efd0451a165de686ba00645d799cddbf305a6f7227954dac78bbfe53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1d3c2dec6b32ab9db2187d008e2416c05f2b7f47a588b0d7e8e4d9645eb0ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1d3c2dec6b32ab9db2187d008e2416c05f2b7f47a588b0d7e8e4d9645eb0ffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.936509 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.946832 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.956151 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.966577 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.978085 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:57 crc kubenswrapper[4757]: I0129 15:11:57.990724 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.000148 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:57Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.009054 4757 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:58Z is after 2025-08-24T17:21:41Z" Jan 29 
15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.018046 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8edba3ca332ecbe447cdb9d9fc5ab2f3c07cf42253ebffa7fbc669e21b9789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e189245d7b997f49cf802b2b19920c9ea002c2fed94b5020b8dd5c8955e5007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6v5r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.018087 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.018119 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.018131 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.018147 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.018166 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:58Z","lastTransitionTime":"2026-01-29T15:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.032816 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9
f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3412536b3e56824e5591481586ad2a7b78e3cab13dcb63aad36e544ec281b3b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.045171 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06723594ec631b4e23ea44dab6453e705a548052738d6da15ae230b788e10933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:49Z\\\",\\\"message\\\":\\\"2026-01-29T15:11:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bcc6513c-ad5c-4fd6-8be7-13fb77a8dc4c\\\\n2026-01-29T15:11:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bcc6513c-ad5c-4fd6-8be7-13fb77a8dc4c to /host/opt/cni/bin/\\\\n2026-01-29T15:11:04Z [verbose] multus-daemon started\\\\n2026-01-29T15:11:04Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:11:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.058654 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b3162370fef0e0542b623cc6779b6642109211555df78e82440a74fedf376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.075515 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:56Z\\\",\\\"message\\\":\\\"40288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00747cf67 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8383,TargetPort:{0 8383 },NodePort:0,AppProtocol:nil,},ServicePort{Name:https-metrics,Protocol:TCP,Port:8081,TargetPort:{0 8081 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: marketplace-operator,},ClusterIP:10.217.5.53,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.53],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0129 15:11:56.344728 6715 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8fwvd_openshift-ovn-kubernetes(e6815a1b-56eb-4075-84ae-1af5d0dcb742)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:11:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.120304 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.120345 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.120358 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.120375 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.120388 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:58Z","lastTransitionTime":"2026-01-29T15:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.222876 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.222928 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.222940 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.222959 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.222973 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:58Z","lastTransitionTime":"2026-01-29T15:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.326133 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.326177 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.326188 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.326203 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.326216 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:58Z","lastTransitionTime":"2026-01-29T15:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.396107 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.396172 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.396200 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:11:58 crc kubenswrapper[4757]: E0129 15:11:58.396325 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:11:58 crc kubenswrapper[4757]: E0129 15:11:58.396457 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:11:58 crc kubenswrapper[4757]: E0129 15:11:58.396571 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.428196 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 18:56:49.454507003 +0000 UTC Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.429850 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.429889 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.429903 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.429922 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.429936 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:58Z","lastTransitionTime":"2026-01-29T15:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.532713 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.532805 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.532830 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.532855 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.532873 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:58Z","lastTransitionTime":"2026-01-29T15:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.635875 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.636004 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.636025 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.636049 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.636067 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:58Z","lastTransitionTime":"2026-01-29T15:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.739425 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.739463 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.739474 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.739488 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.739499 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:58Z","lastTransitionTime":"2026-01-29T15:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.841540 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.841594 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.841606 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.841620 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.841630 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:58Z","lastTransitionTime":"2026-01-29T15:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.945856 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.946035 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.946063 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.946144 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:58 crc kubenswrapper[4757]: I0129 15:11:58.946178 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:58Z","lastTransitionTime":"2026-01-29T15:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:59 crc kubenswrapper[4757]: I0129 15:11:59.050060 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:59 crc kubenswrapper[4757]: I0129 15:11:59.050177 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:59 crc kubenswrapper[4757]: I0129 15:11:59.050201 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:59 crc kubenswrapper[4757]: I0129 15:11:59.050223 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:59 crc kubenswrapper[4757]: I0129 15:11:59.050241 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:59Z","lastTransitionTime":"2026-01-29T15:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:11:59 crc kubenswrapper[4757]: I0129 15:11:59.153699 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:59 crc kubenswrapper[4757]: I0129 15:11:59.153871 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:59 crc kubenswrapper[4757]: I0129 15:11:59.153893 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:59 crc kubenswrapper[4757]: I0129 15:11:59.153963 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:59 crc kubenswrapper[4757]: I0129 15:11:59.153994 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:59Z","lastTransitionTime":"2026-01-29T15:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:59 crc kubenswrapper[4757]: I0129 15:11:59.257050 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:59 crc kubenswrapper[4757]: I0129 15:11:59.257087 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:59 crc kubenswrapper[4757]: I0129 15:11:59.257098 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:59 crc kubenswrapper[4757]: I0129 15:11:59.257116 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:59 crc kubenswrapper[4757]: I0129 15:11:59.257130 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:59Z","lastTransitionTime":"2026-01-29T15:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:11:59 crc kubenswrapper[4757]: I0129 15:11:59.360666 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:11:59 crc kubenswrapper[4757]: I0129 15:11:59.360713 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:11:59 crc kubenswrapper[4757]: I0129 15:11:59.360724 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:11:59 crc kubenswrapper[4757]: I0129 15:11:59.360740 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:11:59 crc kubenswrapper[4757]: I0129 15:11:59.360753 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:11:59Z","lastTransitionTime":"2026-01-29T15:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 15:11:59 crc kubenswrapper[4757]: I0129 15:11:59.395319 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8"
Jan 29 15:11:59 crc kubenswrapper[4757]: E0129 15:11:59.395530 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231"
Jan 29 15:11:59 crc kubenswrapper[4757]: I0129 15:11:59.428854 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 09:11:14.301963112 +0000 UTC
Jan 29 15:12:00 crc kubenswrapper[4757]: I0129 15:12:00.395859 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:12:00 crc kubenswrapper[4757]: I0129 15:12:00.395899 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:12:00 crc kubenswrapper[4757]: I0129 15:12:00.395919 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:12:00 crc kubenswrapper[4757]: E0129 15:12:00.395946 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:12:00 crc kubenswrapper[4757]: E0129 15:12:00.396060 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:12:00 crc kubenswrapper[4757]: E0129 15:12:00.396105 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:12:00 crc kubenswrapper[4757]: I0129 15:12:00.429312 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 04:25:25.048918344 +0000 UTC
Jan 29 15:12:01 crc kubenswrapper[4757]: I0129 15:12:01.395933 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8"
Jan 29 15:12:01 crc kubenswrapper[4757]: E0129 15:12:01.396147 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231"
Jan 29 15:12:01 crc kubenswrapper[4757]: I0129 15:12:01.429756 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 17:42:50.967093032 +0000 UTC
Jan 29 15:12:02 crc kubenswrapper[4757]: I0129 15:12:02.291326 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:12:02 crc kubenswrapper[4757]: E0129 15:12:02.291649 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:06.291619638 +0000 UTC m=+149.580869905 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:12:02 crc kubenswrapper[4757]: I0129 15:12:02.392667 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:12:02 crc kubenswrapper[4757]: I0129 15:12:02.392776 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:12:02 crc kubenswrapper[4757]: I0129 15:12:02.392863 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:12:02 crc kubenswrapper[4757]: E0129 15:12:02.392905 4757 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 29 15:12:02 crc kubenswrapper[4757]: I0129 15:12:02.392941 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:12:02 crc kubenswrapper[4757]: E0129 15:12:02.393023 4757 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 15:12:02 crc kubenswrapper[4757]: E0129 15:12:02.393067 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:13:06.392983364 +0000 UTC m=+149.682233641 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 29 15:12:02 crc kubenswrapper[4757]: E0129 15:12:02.393590 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:13:06.393567502 +0000 UTC m=+149.682817769 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 15:12:02 crc kubenswrapper[4757]: E0129 15:12:02.393136 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 29 15:12:02 crc kubenswrapper[4757]: E0129 15:12:02.393700 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 29 15:12:02 crc kubenswrapper[4757]: E0129 15:12:02.393725 4757 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 15:12:02 crc kubenswrapper[4757]: E0129 15:12:02.393138 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 29 15:12:02 crc kubenswrapper[4757]: E0129 15:12:02.393828 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:13:06.393769009 +0000 UTC m=+149.683019276 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 15:12:02 crc kubenswrapper[4757]: E0129 15:12:02.393831 4757 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 29 15:12:02 crc kubenswrapper[4757]: E0129 15:12:02.393865 4757 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 15:12:02 crc kubenswrapper[4757]: E0129 15:12:02.393939 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:13:06.393913413 +0000 UTC m=+149.683163690 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 15:12:02 crc kubenswrapper[4757]: I0129 15:12:02.395587 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:12:02 crc kubenswrapper[4757]: I0129 15:12:02.395676 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:12:02 crc kubenswrapper[4757]: E0129 15:12:02.395782 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:12:02 crc kubenswrapper[4757]: I0129 15:12:02.395814 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:12:02 crc kubenswrapper[4757]: E0129 15:12:02.395922 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:12:02 crc kubenswrapper[4757]: E0129 15:12:02.396007 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:12:02 crc kubenswrapper[4757]: I0129 15:12:02.430355 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 01:57:20.18967953 +0000 UTC
Has your network provider started?"} Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.292363 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.292404 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.292419 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.292440 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.292461 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:03Z","lastTransitionTime":"2026-01-29T15:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.395491 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:12:03 crc kubenswrapper[4757]: E0129 15:12:03.395667 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.396609 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.396648 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.396663 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.396679 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.396703 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:03Z","lastTransitionTime":"2026-01-29T15:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.431394 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 17:26:16.212211165 +0000 UTC Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.499309 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.499349 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.499360 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.499376 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.499387 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:03Z","lastTransitionTime":"2026-01-29T15:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.602857 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.602928 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.602951 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.602981 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.603004 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:03Z","lastTransitionTime":"2026-01-29T15:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.707067 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.707156 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.707169 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.707223 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.707235 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:03Z","lastTransitionTime":"2026-01-29T15:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.811066 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.811103 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.811116 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.811132 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.811143 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:03Z","lastTransitionTime":"2026-01-29T15:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.914496 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.914564 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.914582 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.914608 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:03 crc kubenswrapper[4757]: I0129 15:12:03.914626 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:03Z","lastTransitionTime":"2026-01-29T15:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.017767 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.017900 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.017922 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.017947 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.017966 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:04Z","lastTransitionTime":"2026-01-29T15:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.121324 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.121399 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.121425 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.121457 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.121482 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:04Z","lastTransitionTime":"2026-01-29T15:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.225009 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.225069 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.225086 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.225109 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.225128 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:04Z","lastTransitionTime":"2026-01-29T15:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.327146 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.327185 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.327194 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.327209 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.327218 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:04Z","lastTransitionTime":"2026-01-29T15:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.395905 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.395937 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.396027 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:12:04 crc kubenswrapper[4757]: E0129 15:12:04.396161 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:12:04 crc kubenswrapper[4757]: E0129 15:12:04.396306 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:12:04 crc kubenswrapper[4757]: E0129 15:12:04.396396 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.429700 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.429742 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.429751 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.429765 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.429775 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:04Z","lastTransitionTime":"2026-01-29T15:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.431850 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 02:20:53.945676861 +0000 UTC Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.533021 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.533064 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.533076 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.533098 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.533110 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:04Z","lastTransitionTime":"2026-01-29T15:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.636291 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.636337 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.636355 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.636403 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.636443 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:04Z","lastTransitionTime":"2026-01-29T15:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.738799 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.738861 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.738878 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.738902 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.738919 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:04Z","lastTransitionTime":"2026-01-29T15:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.841920 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.841985 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.842000 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.842023 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.842041 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:04Z","lastTransitionTime":"2026-01-29T15:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.945106 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.945166 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.945178 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.945200 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:04 crc kubenswrapper[4757]: I0129 15:12:04.945220 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:04Z","lastTransitionTime":"2026-01-29T15:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.048504 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.048562 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.048581 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.048602 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.048620 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:05Z","lastTransitionTime":"2026-01-29T15:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.151846 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.151891 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.151902 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.151920 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.151933 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:05Z","lastTransitionTime":"2026-01-29T15:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.254997 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.255341 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.255439 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.255541 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.255641 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:05Z","lastTransitionTime":"2026-01-29T15:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.358398 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.358458 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.358483 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.358511 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.358534 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:05Z","lastTransitionTime":"2026-01-29T15:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.395844 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:12:05 crc kubenswrapper[4757]: E0129 15:12:05.396354 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.432668 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 16:58:15.146864652 +0000 UTC Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.460189 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.460236 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.460253 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.460293 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.460309 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:05Z","lastTransitionTime":"2026-01-29T15:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.563193 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.563237 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.563247 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.563285 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.563297 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:05Z","lastTransitionTime":"2026-01-29T15:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.689094 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.689159 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.689184 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.689212 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.689232 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:05Z","lastTransitionTime":"2026-01-29T15:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:12:05 crc kubenswrapper[4757]: E0129 15:12:05.712986 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:12:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.717761 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.717821 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.717835 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.717853 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.717865 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:05Z","lastTransitionTime":"2026-01-29T15:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
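The patch failure above is not a kubelet defect as such: the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 serves a certificate that expired on 2025-08-24, so the apiserver rejects every node status update the kubelet sends. A hypothetical diagnostic sketch in Go (assuming only that the endpoint from the log is reachable; this is not an OpenShift tool) that confirms the validity window looks like this:

// certprobe.go: connect to the webhook endpoint named in the error,
// complete the TLS handshake without verification, and print the served
// certificate's validity window against the current time.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // endpoint taken from the log above
	conn, err := tls.Dial("tcp", addr, &tls.Config{
		InsecureSkipVerify: true, // diagnostic only: we want to read the cert, not trust it
	})
	if err != nil {
		log.Fatalf("dial %s: %v", addr, err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore)
	fmt.Printf("notAfter:  %s\n", cert.NotAfter)
	if time.Now().After(cert.NotAfter) {
		// matches the log: current time 2026-01-29T15:12:05Z is after 2025-08-24T17:21:41Z
		fmt.Println("certificate has expired")
	}
}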
event="NodeHasNoDiskPressure" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.741459 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.741474 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.741485 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:05Z","lastTransitionTime":"2026-01-29T15:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:05 crc kubenswrapper[4757]: E0129 15:12:05.756893 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:12:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.760828 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.760866 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
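The "Node became not ready" condition above comes from the container runtime's network-readiness check: the runtime keeps reporting NetworkReady=false until a CNI configuration file shows up in /etc/kubernetes/cni/net.d/. Below is a minimal sketch of that kind of directory probe, assuming only the Go standard library; the file name and helper are illustrative, not CRI-O's actual implementation.

// cnicheck.go - sketch of a CNI-config readiness probe mirroring the
// condition in the log: NetworkReady stays false until a usable
// config file appears in the CNI configuration directory.
// The directory path is taken from the log line above.
package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// cniConfigPresent reports whether dir contains any file with a
// typical CNI config extension.
func cniConfigPresent(dir string) (bool, error) {
    entries, err := os.ReadDir(dir)
    if err != nil {
        return false, err
    }
    for _, e := range entries {
        switch filepath.Ext(e.Name()) {
        case ".conf", ".conflist", ".json":
            return true, nil
        }
    }
    return false, nil
}

func main() {
    ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
    if err != nil || !ok {
        fmt.Println("container runtime network not ready: no CNI configuration file")
        return
    }
    fmt.Println("NetworkReady=true")
}

The log's repeated "Has your network provider started?" hint points at the network plugin (here OVN-Kubernetes) that is expected to write that configuration.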
event="NodeHasNoDiskPressure" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.760878 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.760894 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.760905 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:05Z","lastTransitionTime":"2026-01-29T15:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:05 crc kubenswrapper[4757]: E0129 15:12:05.774955 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:12:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.777772 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.777822 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.777832 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.777848 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.777858 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:05Z","lastTransitionTime":"2026-01-29T15:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:05 crc kubenswrapper[4757]: E0129 15:12:05.788753 4757 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148064Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608864Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:12:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2b0cb187-65d3-4368-92b4-54568692447c\\\",\\\"systemUUID\\\":\\\"5f377355-ee96-4ac8-8c1b-9d23158e8b01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:12:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:12:05 crc kubenswrapper[4757]: E0129 15:12:05.788865 4757 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.790080 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
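Each retry above fails identically, and the kubelet finally gives up with "update node status exceeds retry count". The failure is on the admission path rather than in the kubelet itself: the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-29. A sketch of how one might confirm this from the node, assuming the endpoint from the log is reachable; InsecureSkipVerify lets the handshake complete so the expired certificate can still be read.

// certprobe.go - sketch: fetch the webhook's serving certificate and
// print its validity window, reproducing the x509 expiry check that
// fails in the log. Endpoint taken from the log line above.
package main

import (
    "crypto/tls"
    "fmt"
    "log"
    "time"
)

func main() {
    conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
        InsecureSkipVerify: true, // we want the cert even though it is expired
    })
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    cert := conn.ConnectionState().PeerCertificates[0]
    now := time.Now().UTC()
    fmt.Printf("subject:   %s\n", cert.Subject)
    fmt.Printf("notBefore: %s\n", cert.NotBefore.UTC().Format(time.RFC3339))
    fmt.Printf("notAfter:  %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
    if now.After(cert.NotAfter) {
        fmt.Printf("EXPIRED: current time %s is after %s\n",
            now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
    }
}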
event="NodeHasSufficientMemory" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.790105 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.790113 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.790128 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.790139 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:05Z","lastTransitionTime":"2026-01-29T15:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.893133 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.893202 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.893219 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.893247 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.893289 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:05Z","lastTransitionTime":"2026-01-29T15:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.995813 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.996324 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.996438 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.996538 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:05 crc kubenswrapper[4757]: I0129 15:12:05.996639 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:05Z","lastTransitionTime":"2026-01-29T15:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.099707 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.099791 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.099820 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.099852 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.099875 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:06Z","lastTransitionTime":"2026-01-29T15:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.203164 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.203227 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.203244 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.203311 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.203333 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:06Z","lastTransitionTime":"2026-01-29T15:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.306490 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.306569 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.306595 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.306625 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.306649 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:06Z","lastTransitionTime":"2026-01-29T15:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.395908 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.395972 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.395983 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:12:06 crc kubenswrapper[4757]: E0129 15:12:06.396112 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:12:06 crc kubenswrapper[4757]: E0129 15:12:06.396245 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:12:06 crc kubenswrapper[4757]: E0129 15:12:06.396470 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.410622 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.410684 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.410693 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.410718 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.410731 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:06Z","lastTransitionTime":"2026-01-29T15:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.433174 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 01:24:38.97057539 +0000 UTC Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.513749 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.513856 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.513876 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.513902 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.513920 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:06Z","lastTransitionTime":"2026-01-29T15:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.616177 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.616247 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.616317 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.616353 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.616376 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:06Z","lastTransitionTime":"2026-01-29T15:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.719636 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.719713 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.719732 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.719755 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.719774 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:06Z","lastTransitionTime":"2026-01-29T15:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.823166 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.823245 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.823299 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.823317 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.823334 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:06Z","lastTransitionTime":"2026-01-29T15:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.925658 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.925738 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.925761 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.925789 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:06 crc kubenswrapper[4757]: I0129 15:12:06.925812 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:06Z","lastTransitionTime":"2026-01-29T15:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.029737 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.029848 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.029872 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.029903 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.029930 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:07Z","lastTransitionTime":"2026-01-29T15:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.133487 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.133535 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.133545 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.133560 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.133569 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:07Z","lastTransitionTime":"2026-01-29T15:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.236918 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.236972 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.236984 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.237003 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.237014 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:07Z","lastTransitionTime":"2026-01-29T15:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.339678 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.339902 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.340015 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.340048 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.340066 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:07Z","lastTransitionTime":"2026-01-29T15:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.395493 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:12:07 crc kubenswrapper[4757]: E0129 15:12:07.395749 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.416338 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18611b4b-3eb0-4d3c-a9b1-1899616e8ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8edba3ca332ecbe447cdb9d9fc5ab2f3c07cf42253ebffa7fbc669e21b9789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e189245d7b997f49cf802b2b19920c9ea002c2fed94b5020b8dd5c8955e5007\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k9qk4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6v5r7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:12:07Z is after 2025-08-24T17:21:41Z" Jan 29 
15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.434452 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 23:55:01.254028538 +0000 UTC Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.437950 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:12:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.445576 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.445614 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.445624 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.445640 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.445650 4757 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:07Z","lastTransitionTime":"2026-01-29T15:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.463971 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a8cab8259849e33cf5934a5a92f9518b15d5977801d6b6e462cf818f4398c7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:12:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.481798 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:12:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.494315 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05f902e18070194804667c53334451d71abd5271ea88d145fb98ee7c9b9a9638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:12:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.511651 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f453676a-fbf0-4159-8a5a-04c0138b42c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07f068a9e7f6cb5911d29cb4004358baa004345123018289832451a5be2ad4db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-45q8t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:12:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.528037 4757 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"341afdcd-2c99-472f-9792-0ddd254aeab2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3412536b3e56824e5591481586ad2a7b78e3cab13dc
b63aad36e544ec281b3b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 15:10:59.017703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 15:10:59.017861 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:10:59.019149 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2729941581/tls.crt::/tmp/serving-cert-2729941581/tls.key\\\\\\\"\\\\nI0129 15:10:59.546528 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 15:10:59.559379 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 15:10:59.559406 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 15:10:59.559666 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 15:10:59.559677 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 15:10:59.576793 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 15:10:59.576832 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576837 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 15:10:59.576847 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 15:10:59.576851 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 15:10:59.576855 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 15:10:59.576860 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 15:10:59.577147 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 15:10:59.580052 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:12:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.540213 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bcbdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe6866d7-5a43-46d5-ba84-264847f9cd30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06723594ec631b4e23ea44dab6453e705a548052738d6da15ae230b788e10933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:49Z\\\",\\\"message\\\":\\\"2026-01-29T15:11:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bcc6513c-ad5c-4fd6-8be7-13fb77a8dc4c\\\\n2026-01-29T15:11:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bcc6513c-ad5c-4fd6-8be7-13fb77a8dc4c to /host/opt/cni/bin/\\\\n2026-01-29T15:11:04Z [verbose] multus-daemon started\\\\n2026-01-29T15:11:04Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:11:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqjx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bcbdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:12:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.550776 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.550811 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.550819 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.550832 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.550841 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:07Z","lastTransitionTime":"2026-01-29T15:12:07Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.556309 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dxk67" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ad19a70-dd88-4323-b98b-ae01159e0c64\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51b3162370fef0e0542b623cc6779b6642109211555df78e82440a74fedf376b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d294691996cbba3f9c8ba1cc3f72135193300b99d936d4d90a9dbc8bc97a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89390d90478496be4d3868c95006a4c0e8539ce5a04d6684f02ef9a635388231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ccb35a8fc258ce96ce862d1f4743c9a24e1f4330b2d8504b1dbe237730e4bad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40511a142e71f118d6229268081d6419231471bc1258e8ff33fd539723f840e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:06Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afa9d27b8fc5fece110dc6f32de0a9aa5663bc96eca3391deef447ea8a3e40de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6cf251774abc024a2e848fd7b8c85a7625f000e9cf8ca0db29465c1d7edc91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7xg7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dxk67\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T15:12:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.581719 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:11:56Z\\\",\\\"message\\\":\\\"40288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00747cf67 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:8383,TargetPort:{0 8383 },NodePort:0,AppProtocol:nil,},ServicePort{Name:https-metrics,Protocol:TCP,Port:8081,TargetPort:{0 8081 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: marketplace-operator,},ClusterIP:10.217.5.53,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.53],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0129 15:11:56.344728 6715 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8fwvd_openshift-ovn-kubernetes(e6815a1b-56eb-4075-84ae-1af5d0dcb742)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5zhhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8fwvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:12:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.592404 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:12:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.602032 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-drtf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c722d3b-1755-4633-967e-35591890a231\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bcff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:11:13Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-drtf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:12:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.610403 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qxr9t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ac5eae5-5794-458e-b182-a3203b6638d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://564dfe8ff8875e5c6e52e70d1849fce5f72799b931963cfacb22cfa09888488e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:11:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2h4kv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qxr9t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:12:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.620158 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"50d18ec8-7ca2-409d-bc49-bb72046d3de5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7295d248d2c84ede566467b122c246f9939411d3a6c4777f5071d9201a44e29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec1e2e0dd9c0c84ce946c6f7134e26888b6c980c085935bfb93771609f22fdf0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0df5396fb0c535ee5bebae2cf05fb34edd917037573736b71c156517cf8dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:12:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.630313 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53cc9943-8670-4bdc-a5c0-b7f5260603f9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:11:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ea7daad0c4fdedbde11ce3aefb6b535e868f6af94052fae4e04ab8cb9192\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6396476b582de4682f63061b8c38324ea5e530d9227f520bad41f5fab1e3fa50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef9b463efd0451a165de686ba00645d799cddbf305a6f7227954dac78bbfe53e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1d3c2dec6b32ab9db2187d008e2416c05f2b7f47a588b0d7e8e4d9645eb0ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1d3c2dec6b32ab9db2187d008e2416c05f2b7f47a588b0d7e8e4d9645eb0ffc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:10:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:10:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:12:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.640874 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2ca348b7bbc4d1a45d44f44b41a9ac16a5a2eb930bcca94bdc063228a2fa6eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8609a9fde690ff73308f4c7edc69d8ed00483968aad136fbda6d16ef25e105a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:12:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.649696 4757 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rmlkd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10107436-84cd-4f7f-8f92-2a403cdfe4e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c9f0df3b29955f0946fec505e6f47445e996a351833b8d6ed9233a1b60c52e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxxg9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:10:58Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rmlkd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:12:07Z is after 2025-08-24T17:21:41Z" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.652233 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.652361 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.652481 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.652594 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.652698 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:07Z","lastTransitionTime":"2026-01-29T15:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.754412 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.754448 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.754457 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.754471 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.754479 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:07Z","lastTransitionTime":"2026-01-29T15:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.856659 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.856724 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.856746 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.856767 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.856782 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:07Z","lastTransitionTime":"2026-01-29T15:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.959324 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.959386 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.959398 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.959415 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:07 crc kubenswrapper[4757]: I0129 15:12:07.959426 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:07Z","lastTransitionTime":"2026-01-29T15:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.062508 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.062557 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.062569 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.062586 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.062598 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:08Z","lastTransitionTime":"2026-01-29T15:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.166142 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.166549 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.166647 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.166745 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.166869 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:08Z","lastTransitionTime":"2026-01-29T15:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.269060 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.269123 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.269134 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.269150 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.269162 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:08Z","lastTransitionTime":"2026-01-29T15:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.372110 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.372449 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.372551 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.372657 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.372748 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:08Z","lastTransitionTime":"2026-01-29T15:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.395532 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.395568 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.395568 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:12:08 crc kubenswrapper[4757]: E0129 15:12:08.396003 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:12:08 crc kubenswrapper[4757]: E0129 15:12:08.396051 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:12:08 crc kubenswrapper[4757]: E0129 15:12:08.395810 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.435015 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 00:20:45.582156131 +0000 UTC Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.476016 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.476302 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.476451 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.476564 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.476649 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:08Z","lastTransitionTime":"2026-01-29T15:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.579254 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.579541 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.579622 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.579694 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.579849 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:08Z","lastTransitionTime":"2026-01-29T15:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.682155 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.682572 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.682703 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.682793 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.682871 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:08Z","lastTransitionTime":"2026-01-29T15:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.785878 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.785943 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.785968 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.785996 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.786013 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:08Z","lastTransitionTime":"2026-01-29T15:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.889092 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.889141 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.889154 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.889172 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.889185 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:08Z","lastTransitionTime":"2026-01-29T15:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.991623 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.991701 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.991727 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.991761 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:08 crc kubenswrapper[4757]: I0129 15:12:08.991785 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:08Z","lastTransitionTime":"2026-01-29T15:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.094150 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.094213 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.094234 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.094324 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.094351 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:09Z","lastTransitionTime":"2026-01-29T15:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.197993 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.198061 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.198101 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.198130 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.198152 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:09Z","lastTransitionTime":"2026-01-29T15:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.301973 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.302012 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.302023 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.302039 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.302050 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:09Z","lastTransitionTime":"2026-01-29T15:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.396474 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:12:09 crc kubenswrapper[4757]: E0129 15:12:09.396605 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.404173 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.404244 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.404301 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.404330 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.404347 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:09Z","lastTransitionTime":"2026-01-29T15:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.409742 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.435836 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 14:26:03.061390371 +0000 UTC Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.507827 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.507860 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.507872 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.507896 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.507908 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:09Z","lastTransitionTime":"2026-01-29T15:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.611022 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.611089 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.611113 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.611193 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.611221 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:09Z","lastTransitionTime":"2026-01-29T15:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.715635 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.715697 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.715733 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.715763 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.715783 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:09Z","lastTransitionTime":"2026-01-29T15:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.818756 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.818805 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.818821 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.818843 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.818859 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:09Z","lastTransitionTime":"2026-01-29T15:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.921101 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.921152 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.921160 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.921175 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:09 crc kubenswrapper[4757]: I0129 15:12:09.921185 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:09Z","lastTransitionTime":"2026-01-29T15:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.023681 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.023726 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.023743 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.023763 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.023780 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:10Z","lastTransitionTime":"2026-01-29T15:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.126967 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.127105 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.127116 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.127130 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.127183 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:10Z","lastTransitionTime":"2026-01-29T15:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.230029 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.230082 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.230098 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.230124 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.230141 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:10Z","lastTransitionTime":"2026-01-29T15:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.332993 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.333033 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.333044 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.333066 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.333080 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:10Z","lastTransitionTime":"2026-01-29T15:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.396203 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.396344 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:12:10 crc kubenswrapper[4757]: E0129 15:12:10.396442 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.396541 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:12:10 crc kubenswrapper[4757]: E0129 15:12:10.396956 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:12:10 crc kubenswrapper[4757]: E0129 15:12:10.397116 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.397314 4757 scope.go:117] "RemoveContainer" containerID="5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee" Jan 29 15:12:10 crc kubenswrapper[4757]: E0129 15:12:10.397497 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8fwvd_openshift-ovn-kubernetes(e6815a1b-56eb-4075-84ae-1af5d0dcb742)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.435865 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.435999 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.436020 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.436040 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.436055 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:10Z","lastTransitionTime":"2026-01-29T15:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.436084 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 04:44:54.106402627 +0000 UTC Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.539118 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.539162 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.539173 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.539191 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.539204 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:10Z","lastTransitionTime":"2026-01-29T15:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.642149 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.642371 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.642489 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.642621 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.642700 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:10Z","lastTransitionTime":"2026-01-29T15:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.745565 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.745596 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.745604 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.745617 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.745626 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:10Z","lastTransitionTime":"2026-01-29T15:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.848705 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.848759 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.848775 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.848794 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.848807 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:10Z","lastTransitionTime":"2026-01-29T15:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.951332 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.951414 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.951426 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.951458 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:10 crc kubenswrapper[4757]: I0129 15:12:10.951471 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:10Z","lastTransitionTime":"2026-01-29T15:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.054211 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.054359 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.054387 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.054607 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.054636 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:11Z","lastTransitionTime":"2026-01-29T15:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.157649 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.157743 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.157775 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.157810 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.157840 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:11Z","lastTransitionTime":"2026-01-29T15:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.261033 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.261085 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.261099 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.261150 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.261175 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:11Z","lastTransitionTime":"2026-01-29T15:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.364735 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.364789 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.364803 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.364822 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.364835 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:11Z","lastTransitionTime":"2026-01-29T15:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.395569 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:12:11 crc kubenswrapper[4757]: E0129 15:12:11.395745 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.437126 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 16:31:44.890818437 +0000 UTC Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.466776 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.466806 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.466814 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.466828 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.466836 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:11Z","lastTransitionTime":"2026-01-29T15:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.568723 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.568980 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.569061 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.569141 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.569214 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:11Z","lastTransitionTime":"2026-01-29T15:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.672072 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.672395 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.672515 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.672633 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.672735 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:11Z","lastTransitionTime":"2026-01-29T15:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.775536 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.775567 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.775576 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.775588 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.775597 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:11Z","lastTransitionTime":"2026-01-29T15:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.877808 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.877864 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.877877 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.877898 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.877917 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:11Z","lastTransitionTime":"2026-01-29T15:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.979621 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.979653 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.979663 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.979679 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:11 crc kubenswrapper[4757]: I0129 15:12:11.979689 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:11Z","lastTransitionTime":"2026-01-29T15:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.082579 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.082611 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.082622 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.082637 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.082647 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:12Z","lastTransitionTime":"2026-01-29T15:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.185360 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.185618 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.185702 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.185788 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.185873 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:12Z","lastTransitionTime":"2026-01-29T15:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.288615 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.288646 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.288657 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.288674 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.288685 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:12Z","lastTransitionTime":"2026-01-29T15:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.391562 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.391648 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.391665 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.391728 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.391744 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:12Z","lastTransitionTime":"2026-01-29T15:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.395872 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.395965 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:12:12 crc kubenswrapper[4757]: E0129 15:12:12.396025 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:12:12 crc kubenswrapper[4757]: E0129 15:12:12.396142 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.396239 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:12:12 crc kubenswrapper[4757]: E0129 15:12:12.396387 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.437796 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 19:50:55.629498715 +0000 UTC Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.494257 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.494301 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.494313 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.494328 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.494337 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:12Z","lastTransitionTime":"2026-01-29T15:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.596457 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.596502 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.596520 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.596536 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.596546 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:12Z","lastTransitionTime":"2026-01-29T15:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.698142 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.698473 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.698549 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.698615 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.698678 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:12Z","lastTransitionTime":"2026-01-29T15:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.801347 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.801599 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.801723 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.801826 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.801911 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:12Z","lastTransitionTime":"2026-01-29T15:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.905241 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.905297 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.905310 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.905328 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:12 crc kubenswrapper[4757]: I0129 15:12:12.905340 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:12Z","lastTransitionTime":"2026-01-29T15:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.007671 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.007731 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.007750 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.007775 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.007794 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:13Z","lastTransitionTime":"2026-01-29T15:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.110516 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.110544 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.110552 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.110566 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.110574 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:13Z","lastTransitionTime":"2026-01-29T15:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.212701 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.212899 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.213044 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.213114 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.213169 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:13Z","lastTransitionTime":"2026-01-29T15:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.315625 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.315670 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.315682 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.315695 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.315704 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:13Z","lastTransitionTime":"2026-01-29T15:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.395616 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:12:13 crc kubenswrapper[4757]: E0129 15:12:13.395850 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.417322 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.417360 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.417370 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.417387 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.417399 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:13Z","lastTransitionTime":"2026-01-29T15:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.438735 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 00:55:07.094353267 +0000 UTC Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.522558 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.522593 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.522601 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.522615 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.522624 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:13Z","lastTransitionTime":"2026-01-29T15:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.624773 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.624805 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.624815 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.624831 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.624840 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:13Z","lastTransitionTime":"2026-01-29T15:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.727416 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.727471 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.727489 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.727507 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.727522 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:13Z","lastTransitionTime":"2026-01-29T15:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.830194 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.830246 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.830257 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.830310 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.830323 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:13Z","lastTransitionTime":"2026-01-29T15:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.932344 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.932389 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.932398 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.932410 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:13 crc kubenswrapper[4757]: I0129 15:12:13.932420 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:13Z","lastTransitionTime":"2026-01-29T15:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.034817 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.034861 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.034873 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.034892 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.034904 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:14Z","lastTransitionTime":"2026-01-29T15:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.137027 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.137071 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.137082 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.137098 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.137114 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:14Z","lastTransitionTime":"2026-01-29T15:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.240328 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.240454 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.240476 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.240511 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.240533 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:14Z","lastTransitionTime":"2026-01-29T15:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.342314 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.342351 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.342361 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.342377 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.342388 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:14Z","lastTransitionTime":"2026-01-29T15:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.396162 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.396172 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.396172 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:12:14 crc kubenswrapper[4757]: E0129 15:12:14.396416 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:12:14 crc kubenswrapper[4757]: E0129 15:12:14.396563 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:12:14 crc kubenswrapper[4757]: E0129 15:12:14.396636 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.439103 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 07:15:55.1542131 +0000 UTC Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.445428 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.445483 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.445494 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.445509 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.445519 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:14Z","lastTransitionTime":"2026-01-29T15:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.548676 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.548746 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.548784 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.548818 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.548839 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:14Z","lastTransitionTime":"2026-01-29T15:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.652292 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.652334 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.652342 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.652356 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.652364 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:14Z","lastTransitionTime":"2026-01-29T15:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.755112 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.755169 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.755181 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.755203 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.755224 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:14Z","lastTransitionTime":"2026-01-29T15:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.862827 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.862875 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.862888 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.862906 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.862922 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:14Z","lastTransitionTime":"2026-01-29T15:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.967350 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.967411 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.967433 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.967463 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:12:14 crc kubenswrapper[4757]: I0129 15:12:14.967486 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:14Z","lastTransitionTime":"2026-01-29T15:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.069555 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.069606 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.069622 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.069641 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.069657 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:15Z","lastTransitionTime":"2026-01-29T15:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.171604 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.171644 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.171652 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.171664 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.171675 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:15Z","lastTransitionTime":"2026-01-29T15:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.274079 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.274125 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.274135 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.274151 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.274162 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:15Z","lastTransitionTime":"2026-01-29T15:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.376648 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.376721 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.376734 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.376749 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.376758 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:15Z","lastTransitionTime":"2026-01-29T15:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.396278 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8"
Jan 29 15:12:15 crc kubenswrapper[4757]: E0129 15:12:15.396431 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.439534 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 23:23:36.020164541 +0000 UTC
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.479636 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.479710 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.479722 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.479738 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.479750 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:15Z","lastTransitionTime":"2026-01-29T15:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.582091 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.582137 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.582153 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.582172 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.582186 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:15Z","lastTransitionTime":"2026-01-29T15:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.684769 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.684819 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.684829 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.684842 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.684850 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:15Z","lastTransitionTime":"2026-01-29T15:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.787476 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.787528 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.787544 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.787564 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.787576 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:15Z","lastTransitionTime":"2026-01-29T15:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.889531 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.889573 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.889586 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.889603 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.889616 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:15Z","lastTransitionTime":"2026-01-29T15:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.915409 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.915460 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.915471 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.915490 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.915501 4757 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:12:15Z","lastTransitionTime":"2026-01-29T15:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.953887 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-dcrfq"]
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.954361 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dcrfq"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.964495 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.964752 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.964863 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 29 15:12:15 crc kubenswrapper[4757]: I0129 15:12:15.964650 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.038806 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podStartSLOduration=77.038785757 podStartE2EDuration="1m17.038785757s" podCreationTimestamp="2026-01-29 15:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:12:16.024693472 +0000 UTC m=+99.313943709" watchObservedRunningTime="2026-01-29 15:12:16.038785757 +0000 UTC m=+99.328035994"
Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.050735 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5040bada-fcbf-475a-bce4-fc07491f7ab4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-dcrfq\" (UID: \"5040bada-fcbf-475a-bce4-fc07491f7ab4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dcrfq"
Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.050796 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5040bada-fcbf-475a-bce4-fc07491f7ab4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-dcrfq\" (UID: \"5040bada-fcbf-475a-bce4-fc07491f7ab4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dcrfq"
Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.050815 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5040bada-fcbf-475a-bce4-fc07491f7ab4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-dcrfq\" (UID: \"5040bada-fcbf-475a-bce4-fc07491f7ab4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dcrfq"
Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.050844 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5040bada-fcbf-475a-bce4-fc07491f7ab4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-dcrfq\" (UID: \"5040bada-fcbf-475a-bce4-fc07491f7ab4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dcrfq"
Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.050877 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5040bada-fcbf-475a-bce4-fc07491f7ab4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-dcrfq\" (UID: \"5040bada-fcbf-475a-bce4-fc07491f7ab4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dcrfq"
Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.060431 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6v5r7" podStartSLOduration=76.060413704 podStartE2EDuration="1m16.060413704s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:12:16.039092226 +0000 UTC m=+99.328342483" watchObservedRunningTime="2026-01-29 15:12:16.060413704 +0000 UTC m=+99.349663941"
Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.076840 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-bcbdt" podStartSLOduration=77.07680844 podStartE2EDuration="1m17.07680844s" podCreationTimestamp="2026-01-29 15:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:12:16.076622954 +0000 UTC m=+99.365873191" watchObservedRunningTime="2026-01-29 15:12:16.07680844 +0000 UTC m=+99.366058687"
Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.093800 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-dxk67" podStartSLOduration=77.093783653 podStartE2EDuration="1m17.093783653s" podCreationTimestamp="2026-01-29 15:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:12:16.092169804 +0000 UTC m=+99.381420051" watchObservedRunningTime="2026-01-29 15:12:16.093783653 +0000 UTC m=+99.383033890"
Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.134078 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=76.134049626 podStartE2EDuration="1m16.134049626s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:12:16.133981484 +0000 UTC m=+99.423231721" watchObservedRunningTime="2026-01-29 15:12:16.134049626 +0000 UTC m=+99.423299853"
Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.152180 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5040bada-fcbf-475a-bce4-fc07491f7ab4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-dcrfq\" (UID: \"5040bada-fcbf-475a-bce4-fc07491f7ab4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dcrfq"
Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.152221 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5040bada-fcbf-475a-bce4-fc07491f7ab4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-dcrfq\" (UID: \"5040bada-fcbf-475a-bce4-fc07491f7ab4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dcrfq"
Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.152244 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5040bada-fcbf-475a-bce4-fc07491f7ab4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-dcrfq\" (UID: \"5040bada-fcbf-475a-bce4-fc07491f7ab4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dcrfq"
Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.152318 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5040bada-fcbf-475a-bce4-fc07491f7ab4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-dcrfq\" (UID: \"5040bada-fcbf-475a-bce4-fc07491f7ab4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dcrfq"
Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.152362 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5040bada-fcbf-475a-bce4-fc07491f7ab4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-dcrfq\" (UID: \"5040bada-fcbf-475a-bce4-fc07491f7ab4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dcrfq"
Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.152384 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/5040bada-fcbf-475a-bce4-fc07491f7ab4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-dcrfq\" (UID: \"5040bada-fcbf-475a-bce4-fc07491f7ab4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dcrfq"
Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.152465 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/5040bada-fcbf-475a-bce4-fc07491f7ab4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-dcrfq\" (UID: \"5040bada-fcbf-475a-bce4-fc07491f7ab4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dcrfq"
Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.153322 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5040bada-fcbf-475a-bce4-fc07491f7ab4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-dcrfq\" (UID: \"5040bada-fcbf-475a-bce4-fc07491f7ab4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dcrfq"
"MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5040bada-fcbf-475a-bce4-fc07491f7ab4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-dcrfq\" (UID: \"5040bada-fcbf-475a-bce4-fc07491f7ab4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dcrfq" Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.158139 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5040bada-fcbf-475a-bce4-fc07491f7ab4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-dcrfq\" (UID: \"5040bada-fcbf-475a-bce4-fc07491f7ab4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dcrfq" Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.185957 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5040bada-fcbf-475a-bce4-fc07491f7ab4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-dcrfq\" (UID: \"5040bada-fcbf-475a-bce4-fc07491f7ab4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dcrfq" Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.228851 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=76.22883446 podStartE2EDuration="1m16.22883446s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:12:16.21165148 +0000 UTC m=+99.500901737" watchObservedRunningTime="2026-01-29 15:12:16.22883446 +0000 UTC m=+99.518084707" Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.229008 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=48.229004385 podStartE2EDuration="48.229004385s" podCreationTimestamp="2026-01-29 15:11:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:12:16.228566742 +0000 UTC m=+99.517816979" watchObservedRunningTime="2026-01-29 15:12:16.229004385 +0000 UTC m=+99.518254622" Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.268359 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-rmlkd" podStartSLOduration=79.268340639 podStartE2EDuration="1m19.268340639s" podCreationTimestamp="2026-01-29 15:10:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:12:16.258183645 +0000 UTC m=+99.547433892" watchObservedRunningTime="2026-01-29 15:12:16.268340639 +0000 UTC m=+99.557590876" Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.268480 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-qxr9t" podStartSLOduration=77.268476683 podStartE2EDuration="1m17.268476683s" podCreationTimestamp="2026-01-29 15:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:12:16.267897105 +0000 UTC m=+99.557147342" watchObservedRunningTime="2026-01-29 15:12:16.268476683 +0000 UTC m=+99.557726920" Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.279949 4757 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dcrfq" Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.396178 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.396178 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.396311 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:12:16 crc kubenswrapper[4757]: E0129 15:12:16.396322 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:12:16 crc kubenswrapper[4757]: E0129 15:12:16.396488 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:12:16 crc kubenswrapper[4757]: E0129 15:12:16.396487 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.440311 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 05:27:14.02868687 +0000 UTC Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.440384 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.448460 4757 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.942887 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dcrfq" event={"ID":"5040bada-fcbf-475a-bce4-fc07491f7ab4","Type":"ContainerStarted","Data":"29afca6cbf096775884b6f46176f82643ce3c14713fb3b32cfde364efe12fc27"} Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.943000 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dcrfq" event={"ID":"5040bada-fcbf-475a-bce4-fc07491f7ab4","Type":"ContainerStarted","Data":"92b727e09d9abb75cfff6fe497406af938a5967ed6f83227e9d2ac8f59d6318a"} Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.959187 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-dcrfq" podStartSLOduration=76.959164711 podStartE2EDuration="1m16.959164711s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:12:16.958218691 +0000 UTC m=+100.247468988" watchObservedRunningTime="2026-01-29 15:12:16.959164711 +0000 UTC m=+100.248414968" Jan 29 15:12:16 crc kubenswrapper[4757]: I0129 15:12:16.959510 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=7.959504691 podStartE2EDuration="7.959504691s" podCreationTimestamp="2026-01-29 15:12:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:12:16.29333542 +0000 UTC m=+99.582585657" watchObservedRunningTime="2026-01-29 15:12:16.959504691 +0000 UTC m=+100.248754938" Jan 29 15:12:17 crc kubenswrapper[4757]: I0129 15:12:17.395662 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:12:17 crc kubenswrapper[4757]: E0129 15:12:17.396853 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:12:17 crc kubenswrapper[4757]: I0129 15:12:17.669736 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c722d3b-1755-4633-967e-35591890a231-metrics-certs\") pod \"network-metrics-daemon-drtf8\" (UID: \"8c722d3b-1755-4633-967e-35591890a231\") " pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:12:17 crc kubenswrapper[4757]: E0129 15:12:17.669869 4757 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:12:17 crc kubenswrapper[4757]: E0129 15:12:17.669926 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c722d3b-1755-4633-967e-35591890a231-metrics-certs podName:8c722d3b-1755-4633-967e-35591890a231 nodeName:}" failed. No retries permitted until 2026-01-29 15:13:21.669909268 +0000 UTC m=+164.959159505 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8c722d3b-1755-4633-967e-35591890a231-metrics-certs") pod "network-metrics-daemon-drtf8" (UID: "8c722d3b-1755-4633-967e-35591890a231") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:12:18 crc kubenswrapper[4757]: I0129 15:12:18.396105 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:12:18 crc kubenswrapper[4757]: I0129 15:12:18.396154 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:12:18 crc kubenswrapper[4757]: I0129 15:12:18.396179 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:12:18 crc kubenswrapper[4757]: E0129 15:12:18.396325 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:12:18 crc kubenswrapper[4757]: E0129 15:12:18.396400 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:12:18 crc kubenswrapper[4757]: E0129 15:12:18.396514 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:12:19 crc kubenswrapper[4757]: I0129 15:12:19.396315 4757 util.go:30] "No sandbox for pod can be found. 
Jan 29 15:12:19 crc kubenswrapper[4757]: E0129 15:12:19.396482 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231"
Jan 29 15:12:20 crc kubenswrapper[4757]: I0129 15:12:20.395497 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:12:20 crc kubenswrapper[4757]: I0129 15:12:20.395528 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:12:20 crc kubenswrapper[4757]: I0129 15:12:20.395592 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:12:20 crc kubenswrapper[4757]: E0129 15:12:20.395796 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:12:20 crc kubenswrapper[4757]: E0129 15:12:20.395899 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:12:20 crc kubenswrapper[4757]: E0129 15:12:20.395648 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:12:21 crc kubenswrapper[4757]: I0129 15:12:21.395587 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8"
Jan 29 15:12:21 crc kubenswrapper[4757]: E0129 15:12:21.395716 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231"
Jan 29 15:12:22 crc kubenswrapper[4757]: I0129 15:12:22.396197 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:12:22 crc kubenswrapper[4757]: I0129 15:12:22.396358 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:12:22 crc kubenswrapper[4757]: E0129 15:12:22.396657 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:12:22 crc kubenswrapper[4757]: I0129 15:12:22.396671 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:12:22 crc kubenswrapper[4757]: E0129 15:12:22.396803 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:12:22 crc kubenswrapper[4757]: E0129 15:12:22.396868 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:12:22 crc kubenswrapper[4757]: I0129 15:12:22.414302 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Jan 29 15:12:23 crc kubenswrapper[4757]: I0129 15:12:23.395555 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8"
Jan 29 15:12:23 crc kubenswrapper[4757]: E0129 15:12:23.395973 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231"
Jan 29 15:12:23 crc kubenswrapper[4757]: I0129 15:12:23.396297 4757 scope.go:117] "RemoveContainer" containerID="5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee"
Jan 29 15:12:23 crc kubenswrapper[4757]: E0129 15:12:23.396464 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8fwvd_openshift-ovn-kubernetes(e6815a1b-56eb-4075-84ae-1af5d0dcb742)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742"
Jan 29 15:12:24 crc kubenswrapper[4757]: I0129 15:12:24.395803 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:12:24 crc kubenswrapper[4757]: I0129 15:12:24.395852 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:12:24 crc kubenswrapper[4757]: I0129 15:12:24.395853 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:12:24 crc kubenswrapper[4757]: E0129 15:12:24.395920 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:12:24 crc kubenswrapper[4757]: E0129 15:12:24.396073 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:12:24 crc kubenswrapper[4757]: E0129 15:12:24.396165 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:12:25 crc kubenswrapper[4757]: I0129 15:12:25.395737 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8"
Jan 29 15:12:25 crc kubenswrapper[4757]: E0129 15:12:25.395857 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231"
Jan 29 15:12:26 crc kubenswrapper[4757]: I0129 15:12:26.395495 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:12:26 crc kubenswrapper[4757]: I0129 15:12:26.395495 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:12:26 crc kubenswrapper[4757]: I0129 15:12:26.395657 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:12:26 crc kubenswrapper[4757]: E0129 15:12:26.395755 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:12:26 crc kubenswrapper[4757]: E0129 15:12:26.395908 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:12:26 crc kubenswrapper[4757]: E0129 15:12:26.395977 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:12:27 crc kubenswrapper[4757]: I0129 15:12:27.395977 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8"
Jan 29 15:12:27 crc kubenswrapper[4757]: E0129 15:12:27.396967 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231"
Jan 29 15:12:27 crc kubenswrapper[4757]: I0129 15:12:27.427521 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=5.427490323 podStartE2EDuration="5.427490323s" podCreationTimestamp="2026-01-29 15:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:12:27.424987225 +0000 UTC m=+110.714237462" watchObservedRunningTime="2026-01-29 15:12:27.427490323 +0000 UTC m=+110.716740600"
Jan 29 15:12:28 crc kubenswrapper[4757]: I0129 15:12:28.395584 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:12:28 crc kubenswrapper[4757]: I0129 15:12:28.396246 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:12:28 crc kubenswrapper[4757]: I0129 15:12:28.396813 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:12:28 crc kubenswrapper[4757]: E0129 15:12:28.396967 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:12:28 crc kubenswrapper[4757]: E0129 15:12:28.397360 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:12:28 crc kubenswrapper[4757]: E0129 15:12:28.397672 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:12:29 crc kubenswrapper[4757]: I0129 15:12:29.405680 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8"
Jan 29 15:12:29 crc kubenswrapper[4757]: E0129 15:12:29.405822 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231"
Jan 29 15:12:30 crc kubenswrapper[4757]: I0129 15:12:30.396123 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:12:30 crc kubenswrapper[4757]: E0129 15:12:30.396638 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:12:30 crc kubenswrapper[4757]: I0129 15:12:30.396703 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:12:30 crc kubenswrapper[4757]: E0129 15:12:30.396871 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:12:30 crc kubenswrapper[4757]: I0129 15:12:30.396737 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:12:30 crc kubenswrapper[4757]: E0129 15:12:30.397050 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:12:31 crc kubenswrapper[4757]: I0129 15:12:31.395600 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8"
Jan 29 15:12:31 crc kubenswrapper[4757]: E0129 15:12:31.395910 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231"
Jan 29 15:12:32 crc kubenswrapper[4757]: I0129 15:12:32.395880 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:12:32 crc kubenswrapper[4757]: I0129 15:12:32.395916 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:12:32 crc kubenswrapper[4757]: I0129 15:12:32.395898 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:12:32 crc kubenswrapper[4757]: E0129 15:12:32.396014 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:12:32 crc kubenswrapper[4757]: E0129 15:12:32.396068 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:12:32 crc kubenswrapper[4757]: E0129 15:12:32.396130 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:12:33 crc kubenswrapper[4757]: I0129 15:12:33.396243 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8"
Jan 29 15:12:33 crc kubenswrapper[4757]: E0129 15:12:33.396469 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231"
Jan 29 15:12:34 crc kubenswrapper[4757]: I0129 15:12:34.395823 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:12:34 crc kubenswrapper[4757]: I0129 15:12:34.395907 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:12:34 crc kubenswrapper[4757]: I0129 15:12:34.395975 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:12:34 crc kubenswrapper[4757]: E0129 15:12:34.396098 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:12:34 crc kubenswrapper[4757]: E0129 15:12:34.396371 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:12:34 crc kubenswrapper[4757]: E0129 15:12:34.396566 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:12:35 crc kubenswrapper[4757]: I0129 15:12:35.396052 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8"
Jan 29 15:12:35 crc kubenswrapper[4757]: E0129 15:12:35.396201 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:12:36 crc kubenswrapper[4757]: I0129 15:12:36.009867 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bcbdt_fe6866d7-5a43-46d5-ba84-264847f9cd30/kube-multus/1.log" Jan 29 15:12:36 crc kubenswrapper[4757]: I0129 15:12:36.011089 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bcbdt_fe6866d7-5a43-46d5-ba84-264847f9cd30/kube-multus/0.log" Jan 29 15:12:36 crc kubenswrapper[4757]: I0129 15:12:36.011203 4757 generic.go:334] "Generic (PLEG): container finished" podID="fe6866d7-5a43-46d5-ba84-264847f9cd30" containerID="06723594ec631b4e23ea44dab6453e705a548052738d6da15ae230b788e10933" exitCode=1 Jan 29 15:12:36 crc kubenswrapper[4757]: I0129 15:12:36.011317 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bcbdt" event={"ID":"fe6866d7-5a43-46d5-ba84-264847f9cd30","Type":"ContainerDied","Data":"06723594ec631b4e23ea44dab6453e705a548052738d6da15ae230b788e10933"} Jan 29 15:12:36 crc kubenswrapper[4757]: I0129 15:12:36.011395 4757 scope.go:117] "RemoveContainer" containerID="8935e5e4e1f9f2b9ab89e1e457d9ee60fde50e7d21dddb1f44cd71341f9ed0a2" Jan 29 15:12:36 crc kubenswrapper[4757]: I0129 15:12:36.012163 4757 scope.go:117] "RemoveContainer" containerID="06723594ec631b4e23ea44dab6453e705a548052738d6da15ae230b788e10933" Jan 29 15:12:36 crc kubenswrapper[4757]: E0129 15:12:36.012415 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-bcbdt_openshift-multus(fe6866d7-5a43-46d5-ba84-264847f9cd30)\"" pod="openshift-multus/multus-bcbdt" podUID="fe6866d7-5a43-46d5-ba84-264847f9cd30" Jan 29 15:12:36 crc kubenswrapper[4757]: I0129 15:12:36.396101 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:12:36 crc kubenswrapper[4757]: E0129 15:12:36.396259 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:12:36 crc kubenswrapper[4757]: I0129 15:12:36.396117 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:12:36 crc kubenswrapper[4757]: E0129 15:12:36.396559 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:12:36 crc kubenswrapper[4757]: I0129 15:12:36.396630 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:12:36 crc kubenswrapper[4757]: E0129 15:12:36.396886 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:12:37 crc kubenswrapper[4757]: I0129 15:12:37.016384 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bcbdt_fe6866d7-5a43-46d5-ba84-264847f9cd30/kube-multus/1.log" Jan 29 15:12:37 crc kubenswrapper[4757]: E0129 15:12:37.391452 4757 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 29 15:12:37 crc kubenswrapper[4757]: I0129 15:12:37.396172 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:12:37 crc kubenswrapper[4757]: E0129 15:12:37.397654 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:12:37 crc kubenswrapper[4757]: I0129 15:12:37.398043 4757 scope.go:117] "RemoveContainer" containerID="5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee" Jan 29 15:12:37 crc kubenswrapper[4757]: E0129 15:12:37.476881 4757 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 15:12:38 crc kubenswrapper[4757]: I0129 15:12:38.022722 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8fwvd_e6815a1b-56eb-4075-84ae-1af5d0dcb742/ovnkube-controller/3.log" Jan 29 15:12:38 crc kubenswrapper[4757]: I0129 15:12:38.025226 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" event={"ID":"e6815a1b-56eb-4075-84ae-1af5d0dcb742","Type":"ContainerStarted","Data":"d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe"} Jan 29 15:12:38 crc kubenswrapper[4757]: I0129 15:12:38.025640 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:12:38 crc kubenswrapper[4757]: I0129 15:12:38.396138 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:12:38 crc kubenswrapper[4757]: I0129 15:12:38.396198 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:12:38 crc kubenswrapper[4757]: I0129 15:12:38.396256 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:12:38 crc kubenswrapper[4757]: E0129 15:12:38.396288 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:12:38 crc kubenswrapper[4757]: E0129 15:12:38.396369 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:12:38 crc kubenswrapper[4757]: E0129 15:12:38.396513 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:12:38 crc kubenswrapper[4757]: I0129 15:12:38.445326 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" podStartSLOduration=98.445255536 podStartE2EDuration="1m38.445255536s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:12:38.063378925 +0000 UTC m=+121.352629172" watchObservedRunningTime="2026-01-29 15:12:38.445255536 +0000 UTC m=+121.734505773" Jan 29 15:12:38 crc kubenswrapper[4757]: I0129 15:12:38.446384 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-drtf8"] Jan 29 15:12:38 crc kubenswrapper[4757]: I0129 15:12:38.446513 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:12:38 crc kubenswrapper[4757]: E0129 15:12:38.446640 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:12:40 crc kubenswrapper[4757]: I0129 15:12:40.395366 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:12:40 crc kubenswrapper[4757]: I0129 15:12:40.395423 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:12:40 crc kubenswrapper[4757]: I0129 15:12:40.395462 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:12:40 crc kubenswrapper[4757]: E0129 15:12:40.395597 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:12:40 crc kubenswrapper[4757]: E0129 15:12:40.395728 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:12:40 crc kubenswrapper[4757]: E0129 15:12:40.395954 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:12:40 crc kubenswrapper[4757]: I0129 15:12:40.396369 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:12:40 crc kubenswrapper[4757]: E0129 15:12:40.396574 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:12:42 crc kubenswrapper[4757]: I0129 15:12:42.396149 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:12:42 crc kubenswrapper[4757]: I0129 15:12:42.396227 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:12:42 crc kubenswrapper[4757]: E0129 15:12:42.396289 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:12:42 crc kubenswrapper[4757]: I0129 15:12:42.396351 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:12:42 crc kubenswrapper[4757]: I0129 15:12:42.396246 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:12:42 crc kubenswrapper[4757]: E0129 15:12:42.396468 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:12:42 crc kubenswrapper[4757]: E0129 15:12:42.396729 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:12:42 crc kubenswrapper[4757]: E0129 15:12:42.396987 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:12:42 crc kubenswrapper[4757]: E0129 15:12:42.478637 4757 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 15:12:44 crc kubenswrapper[4757]: I0129 15:12:44.395345 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:12:44 crc kubenswrapper[4757]: I0129 15:12:44.395350 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:12:44 crc kubenswrapper[4757]: E0129 15:12:44.396121 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:12:44 crc kubenswrapper[4757]: I0129 15:12:44.395463 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:12:44 crc kubenswrapper[4757]: E0129 15:12:44.396365 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:12:44 crc kubenswrapper[4757]: I0129 15:12:44.395406 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:12:44 crc kubenswrapper[4757]: E0129 15:12:44.395938 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:12:44 crc kubenswrapper[4757]: E0129 15:12:44.396513 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:12:46 crc kubenswrapper[4757]: I0129 15:12:46.396241 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:12:46 crc kubenswrapper[4757]: I0129 15:12:46.396355 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:12:46 crc kubenswrapper[4757]: I0129 15:12:46.396253 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:12:46 crc kubenswrapper[4757]: I0129 15:12:46.396245 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:12:46 crc kubenswrapper[4757]: E0129 15:12:46.396449 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:12:46 crc kubenswrapper[4757]: E0129 15:12:46.396581 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:12:46 crc kubenswrapper[4757]: E0129 15:12:46.396691 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:12:46 crc kubenswrapper[4757]: I0129 15:12:46.397162 4757 scope.go:117] "RemoveContainer" containerID="06723594ec631b4e23ea44dab6453e705a548052738d6da15ae230b788e10933" Jan 29 15:12:46 crc kubenswrapper[4757]: E0129 15:12:46.397260 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:12:47 crc kubenswrapper[4757]: I0129 15:12:47.058346 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bcbdt_fe6866d7-5a43-46d5-ba84-264847f9cd30/kube-multus/1.log" Jan 29 15:12:47 crc kubenswrapper[4757]: I0129 15:12:47.058399 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bcbdt" event={"ID":"fe6866d7-5a43-46d5-ba84-264847f9cd30","Type":"ContainerStarted","Data":"859df83d243d00747696baf633188d0927d51a4929ba5fc0bb8c0ad484d17f9d"} Jan 29 15:12:47 crc kubenswrapper[4757]: E0129 15:12:47.479233 4757 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 15:12:48 crc kubenswrapper[4757]: I0129 15:12:48.395470 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:12:48 crc kubenswrapper[4757]: E0129 15:12:48.395587 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:12:48 crc kubenswrapper[4757]: I0129 15:12:48.395470 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:12:48 crc kubenswrapper[4757]: I0129 15:12:48.395659 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:12:48 crc kubenswrapper[4757]: I0129 15:12:48.395495 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:12:48 crc kubenswrapper[4757]: E0129 15:12:48.395792 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:12:48 crc kubenswrapper[4757]: E0129 15:12:48.395834 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:12:48 crc kubenswrapper[4757]: E0129 15:12:48.395888 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:12:50 crc kubenswrapper[4757]: I0129 15:12:50.395703 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:12:50 crc kubenswrapper[4757]: I0129 15:12:50.395705 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:12:50 crc kubenswrapper[4757]: E0129 15:12:50.396201 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:12:50 crc kubenswrapper[4757]: I0129 15:12:50.395980 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:12:50 crc kubenswrapper[4757]: E0129 15:12:50.396335 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:12:50 crc kubenswrapper[4757]: E0129 15:12:50.396114 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:12:50 crc kubenswrapper[4757]: I0129 15:12:50.396004 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:12:50 crc kubenswrapper[4757]: E0129 15:12:50.397954 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:12:52 crc kubenswrapper[4757]: I0129 15:12:52.395666 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:12:52 crc kubenswrapper[4757]: E0129 15:12:52.395877 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-drtf8" podUID="8c722d3b-1755-4633-967e-35591890a231" Jan 29 15:12:52 crc kubenswrapper[4757]: I0129 15:12:52.395965 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:12:52 crc kubenswrapper[4757]: I0129 15:12:52.395985 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:12:52 crc kubenswrapper[4757]: E0129 15:12:52.396133 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:12:52 crc kubenswrapper[4757]: E0129 15:12:52.396185 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:12:52 crc kubenswrapper[4757]: I0129 15:12:52.396694 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:12:52 crc kubenswrapper[4757]: E0129 15:12:52.396861 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:12:54 crc kubenswrapper[4757]: I0129 15:12:54.396096 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:12:54 crc kubenswrapper[4757]: I0129 15:12:54.396107 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:12:54 crc kubenswrapper[4757]: I0129 15:12:54.396149 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:12:54 crc kubenswrapper[4757]: I0129 15:12:54.396172 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8" Jan 29 15:12:54 crc kubenswrapper[4757]: I0129 15:12:54.400831 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 15:12:54 crc kubenswrapper[4757]: I0129 15:12:54.400937 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 29 15:12:54 crc kubenswrapper[4757]: I0129 15:12:54.403248 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 29 15:12:54 crc kubenswrapper[4757]: I0129 15:12:54.403453 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 15:12:54 crc kubenswrapper[4757]: I0129 15:12:54.403615 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 15:12:54 crc kubenswrapper[4757]: I0129 15:12:54.403685 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.523821 4757 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.572117 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.573084 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.573234 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.574632 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.579002 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.579688 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.581406 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.582176 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.582243 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.582363 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.582917 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.586982 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.587208 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.587462 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.587617 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.587782 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.589539 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jkgsj"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.589991 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-jkgsj" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.590507 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.590674 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.590850 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.593427 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-2g78z"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.594456 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2g78z" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.595336 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvpbt"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.595874 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvpbt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.596560 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hghqd"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.597136 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hghqd" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.598405 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.598443 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.598544 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.598612 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-m6jnj"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.599006 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-m6jnj" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.599836 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8tbgk"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.600342 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.601774 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-z9qzn"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.602564 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-z9qzn" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.605995 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kjgkg"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.606659 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.607323 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.607496 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.615116 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.615594 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.616047 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.616135 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.616359 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.616873 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.623035 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.623035 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.623828 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.625994 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.626848 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.627198 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.628375 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.627617 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 15:12:56 
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.630150 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.701392 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dacc418b-f809-4317-9526-08c5781c6f68-client-ca\") pod \"route-controller-manager-6576b87f9c-pf59m\" (UID: \"dacc418b-f809-4317-9526-08c5781c6f68\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.701426 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dacc418b-f809-4317-9526-08c5781c6f68-serving-cert\") pod \"route-controller-manager-6576b87f9c-pf59m\" (UID: \"dacc418b-f809-4317-9526-08c5781c6f68\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.701457 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84fvf\" (UniqueName: \"kubernetes.io/projected/dacc418b-f809-4317-9526-08c5781c6f68-kube-api-access-84fvf\") pod \"route-controller-manager-6576b87f9c-pf59m\" (UID: \"dacc418b-f809-4317-9526-08c5781c6f68\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.701475 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjjb9\" (UniqueName: \"kubernetes.io/projected/bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a-kube-api-access-gjjb9\") pod \"apiserver-7bbb656c7d-zzvjx\" (UID: \"bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.701493 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zzvjx\" (UID: \"bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.701516 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f-config\") pod \"console-operator-58897d9998-jkgsj\" (UID: \"1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f\") " pod="openshift-console-operator/console-operator-58897d9998-jkgsj"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.701531 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbp9x\" (UniqueName: \"kubernetes.io/projected/1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f-kube-api-access-mbp9x\") pod \"console-operator-58897d9998-jkgsj\" (UID: \"1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f\") " pod="openshift-console-operator/console-operator-58897d9998-jkgsj"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.701548 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a-audit-policies\") pod \"apiserver-7bbb656c7d-zzvjx\" (UID: \"bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.701560 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zzvjx\" (UID: \"bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.701574 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a-etcd-client\") pod \"apiserver-7bbb656c7d-zzvjx\" (UID: \"bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.701606 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a-encryption-config\") pod \"apiserver-7bbb656c7d-zzvjx\" (UID: \"bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.701621 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a-audit-dir\") pod \"apiserver-7bbb656c7d-zzvjx\" (UID: \"bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.701634 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f-serving-cert\") pod \"console-operator-58897d9998-jkgsj\" (UID: \"1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f\") " pod="openshift-console-operator/console-operator-58897d9998-jkgsj"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.701650 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f-trusted-ca\") pod \"console-operator-58897d9998-jkgsj\" (UID: \"1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f\") " pod="openshift-console-operator/console-operator-58897d9998-jkgsj"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.701665 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dacc418b-f809-4317-9526-08c5781c6f68-config\") pod \"route-controller-manager-6576b87f9c-pf59m\" (UID: \"dacc418b-f809-4317-9526-08c5781c6f68\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.701688 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a-serving-cert\") pod \"apiserver-7bbb656c7d-zzvjx\" (UID: \"bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.702163 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.702548 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.702566 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.702650 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.702780 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.702808 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-skxmw"]
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.703013 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.703118 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.703190 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.703242 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-gs77j"]
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.703346 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.703409 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.703459 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.703512 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.703534 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h7pw2"]
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.703601 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-skxmw"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.703702 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-gs77j"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.703608 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.704522 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bkltp"]
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.704822 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9p7l"]
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.705055 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h7pw2"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.705136 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bkltp"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.705224 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9p7l"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.705439 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-zrp48"]
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.706102 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-zrp48"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.707116 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pvt9r"]
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.708624 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.709277 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.717136 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.717436 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.717619 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.717853 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.717939 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.718393 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jx6g5"]
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.718591 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-pvt9r"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.719134 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jx6g5"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.719853 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.720158 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.720303 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.721781 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-k5nbp"]
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.722317 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-k5nbp"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.722840 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-trnpt"]
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.724438 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.724538 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.724658 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.724850 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.724954 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.725129 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.725255 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.725344 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.725448 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.725566 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.725624 4757 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.727739 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.727944 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.728083 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.729142 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.729332 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.729487 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.729620 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.729638 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.729690 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.729836 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.729867 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.729877 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.729963 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.730366 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.730387 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.730553 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2x58v"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.730577 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.731282 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-trnpt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.732070 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2x58v" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.736312 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.736358 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnhtd"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.736786 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnhtd" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.738706 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ssg7r"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.739197 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-grbn4"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.743667 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ssg7r" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.744629 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.746384 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.746573 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.749256 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pw8nl"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.749555 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-plpn9"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.749568 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.760562 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mg555"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.761796 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-grbn4" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.762702 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.763094 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pw8nl" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.763390 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.764097 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-plpn9" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.765481 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.766097 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.766773 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.779642 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.780864 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.782468 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.782602 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.782698 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.782923 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.785489 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rpcpn"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.786037 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-m7q76"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.786349 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-mfd9g"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.786440 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.786633 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpcpn" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.786699 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-mfd9g" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.786948 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-m7q76" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.788379 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.788946 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.789652 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-tvv95"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.790020 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.790475 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tvv95" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.790696 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qzc6g"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.791333 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qzc6g" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.794048 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-dkjl5"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.794613 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lhw6r"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.795144 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.795241 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lhw6r" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.795514 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.795707 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dkjl5" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.797779 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.797910 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dz9cf"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.798448 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dz9cf" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.798535 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-h9rvk"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.799090 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-h9rvk" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.801421 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494980-9zrww"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.801951 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802407 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c20ebd50-0f39-4321-84c3-1806672c78c0-auth-proxy-config\") pod \"machine-approver-56656f9798-2g78z\" (UID: \"c20ebd50-0f39-4321-84c3-1806672c78c0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2g78z" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802437 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84fvf\" (UniqueName: \"kubernetes.io/projected/dacc418b-f809-4317-9526-08c5781c6f68-kube-api-access-84fvf\") pod \"route-controller-manager-6576b87f9c-pf59m\" (UID: \"dacc418b-f809-4317-9526-08c5781c6f68\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802456 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrmz9\" (UniqueName: \"kubernetes.io/projected/ea61811e-2455-4157-a3f3-1376f4a11e8c-kube-api-access-mrmz9\") pod \"cluster-samples-operator-665b6dd947-fvpbt\" (UID: \"ea61811e-2455-4157-a3f3-1376f4a11e8c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvpbt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802473 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjjb9\" (UniqueName: \"kubernetes.io/projected/bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a-kube-api-access-gjjb9\") pod \"apiserver-7bbb656c7d-zzvjx\" (UID: \"bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802490 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad7f4116-0c15-4b08-9edc-bacd65170a95-service-ca-bundle\") pod \"authentication-operator-69f744f599-m6jnj\" (UID: \"ad7f4116-0c15-4b08-9edc-bacd65170a95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m6jnj" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802513 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zzvjx\" (UID: \"bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802530 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjp2x\" (UniqueName: \"kubernetes.io/projected/ad7f4116-0c15-4b08-9edc-bacd65170a95-kube-api-access-mjp2x\") pod \"authentication-operator-69f744f599-m6jnj\" (UID: \"ad7f4116-0c15-4b08-9edc-bacd65170a95\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-m6jnj" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802547 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea61811e-2455-4157-a3f3-1376f4a11e8c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fvpbt\" (UID: \"ea61811e-2455-4157-a3f3-1376f4a11e8c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvpbt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802562 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad7f4116-0c15-4b08-9edc-bacd65170a95-config\") pod \"authentication-operator-69f744f599-m6jnj\" (UID: \"ad7f4116-0c15-4b08-9edc-bacd65170a95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m6jnj" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802580 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f-config\") pod \"console-operator-58897d9998-jkgsj\" (UID: \"1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f\") " pod="openshift-console-operator/console-operator-58897d9998-jkgsj" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802594 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c20ebd50-0f39-4321-84c3-1806672c78c0-config\") pod \"machine-approver-56656f9798-2g78z\" (UID: \"c20ebd50-0f39-4321-84c3-1806672c78c0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2g78z" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802611 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbp9x\" (UniqueName: \"kubernetes.io/projected/1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f-kube-api-access-mbp9x\") pod \"console-operator-58897d9998-jkgsj\" (UID: \"1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f\") " pod="openshift-console-operator/console-operator-58897d9998-jkgsj" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802626 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bab27dde-a537-445c-8d39-ad7479b66bcb-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-z9qzn\" (UID: \"bab27dde-a537-445c-8d39-ad7479b66bcb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z9qzn" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802643 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a-audit-policies\") pod \"apiserver-7bbb656c7d-zzvjx\" (UID: \"bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802659 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a-etcd-client\") pod \"apiserver-7bbb656c7d-zzvjx\" (UID: \"bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 
15:12:56.802674 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zzvjx\" (UID: \"bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802719 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bab27dde-a537-445c-8d39-ad7479b66bcb-images\") pod \"machine-api-operator-5694c8668f-z9qzn\" (UID: \"bab27dde-a537-445c-8d39-ad7479b66bcb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z9qzn" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802737 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a-encryption-config\") pod \"apiserver-7bbb656c7d-zzvjx\" (UID: \"bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802753 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bab27dde-a537-445c-8d39-ad7479b66bcb-config\") pod \"machine-api-operator-5694c8668f-z9qzn\" (UID: \"bab27dde-a537-445c-8d39-ad7479b66bcb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z9qzn" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802767 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn87t\" (UniqueName: \"kubernetes.io/projected/c20ebd50-0f39-4321-84c3-1806672c78c0-kube-api-access-bn87t\") pod \"machine-approver-56656f9798-2g78z\" (UID: \"c20ebd50-0f39-4321-84c3-1806672c78c0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2g78z" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802781 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad7f4116-0c15-4b08-9edc-bacd65170a95-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-m6jnj\" (UID: \"ad7f4116-0c15-4b08-9edc-bacd65170a95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m6jnj" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802795 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a-audit-dir\") pod \"apiserver-7bbb656c7d-zzvjx\" (UID: \"bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802813 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f-serving-cert\") pod \"console-operator-58897d9998-jkgsj\" (UID: \"1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f\") " pod="openshift-console-operator/console-operator-58897d9998-jkgsj" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802826 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f-trusted-ca\") pod \"console-operator-58897d9998-jkgsj\" (UID: \"1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f\") " pod="openshift-console-operator/console-operator-58897d9998-jkgsj" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802840 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c20ebd50-0f39-4321-84c3-1806672c78c0-machine-approver-tls\") pod \"machine-approver-56656f9798-2g78z\" (UID: \"c20ebd50-0f39-4321-84c3-1806672c78c0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2g78z" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802866 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dacc418b-f809-4317-9526-08c5781c6f68-config\") pod \"route-controller-manager-6576b87f9c-pf59m\" (UID: \"dacc418b-f809-4317-9526-08c5781c6f68\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802882 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a-serving-cert\") pod \"apiserver-7bbb656c7d-zzvjx\" (UID: \"bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802899 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdbjh\" (UniqueName: \"kubernetes.io/projected/bab27dde-a537-445c-8d39-ad7479b66bcb-kube-api-access-rdbjh\") pod \"machine-api-operator-5694c8668f-z9qzn\" (UID: \"bab27dde-a537-445c-8d39-ad7479b66bcb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z9qzn" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802914 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad7f4116-0c15-4b08-9edc-bacd65170a95-serving-cert\") pod \"authentication-operator-69f744f599-m6jnj\" (UID: \"ad7f4116-0c15-4b08-9edc-bacd65170a95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m6jnj" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802932 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dacc418b-f809-4317-9526-08c5781c6f68-client-ca\") pod \"route-controller-manager-6576b87f9c-pf59m\" (UID: \"dacc418b-f809-4317-9526-08c5781c6f68\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.802949 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dacc418b-f809-4317-9526-08c5781c6f68-serving-cert\") pod \"route-controller-manager-6576b87f9c-pf59m\" (UID: \"dacc418b-f809-4317-9526-08c5781c6f68\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.803337 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a-etcd-serving-ca\") pod 
\"apiserver-7bbb656c7d-zzvjx\" (UID: \"bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.803654 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-9zrww" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.804092 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f-config\") pod \"console-operator-58897d9998-jkgsj\" (UID: \"1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f\") " pod="openshift-console-operator/console-operator-58897d9998-jkgsj" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.804239 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a-audit-policies\") pod \"apiserver-7bbb656c7d-zzvjx\" (UID: \"bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.804653 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zzvjx\" (UID: \"bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.804683 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nkstc"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.806691 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a-audit-dir\") pod \"apiserver-7bbb656c7d-zzvjx\" (UID: \"bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.808078 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.809259 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f-trusted-ca\") pod \"console-operator-58897d9998-jkgsj\" (UID: \"1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f\") " pod="openshift-console-operator/console-operator-58897d9998-jkgsj" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.810040 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dacc418b-f809-4317-9526-08c5781c6f68-client-ca\") pod \"route-controller-manager-6576b87f9c-pf59m\" (UID: \"dacc418b-f809-4317-9526-08c5781c6f68\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.811804 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dacc418b-f809-4317-9526-08c5781c6f68-config\") pod \"route-controller-manager-6576b87f9c-pf59m\" (UID: \"dacc418b-f809-4317-9526-08c5781c6f68\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m" Jan 29 15:12:56 
crc kubenswrapper[4757]: I0129 15:12:56.811839 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dacc418b-f809-4317-9526-08c5781c6f68-serving-cert\") pod \"route-controller-manager-6576b87f9c-pf59m\" (UID: \"dacc418b-f809-4317-9526-08c5781c6f68\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.812047 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n44qs"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.812144 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nkstc" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.812674 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n44qs" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.815340 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a-serving-cert\") pod \"apiserver-7bbb656c7d-zzvjx\" (UID: \"bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.817560 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-m6jnj"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.819707 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8tbgk"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.819752 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a-etcd-client\") pod \"apiserver-7bbb656c7d-zzvjx\" (UID: \"bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.820445 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f-serving-cert\") pod \"console-operator-58897d9998-jkgsj\" (UID: \"1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f\") " pod="openshift-console-operator/console-operator-58897d9998-jkgsj" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.821829 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jkgsj"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.822397 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-z9qzn"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.827255 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hghqd"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.827360 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h7pw2"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.831423 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a-encryption-config\") pod \"apiserver-7bbb656c7d-zzvjx\" (UID: \"bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.833295 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-m7q76"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.838308 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.847527 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-k5nbp"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.849741 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9p7l"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.852198 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-zrp48"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.853424 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kjgkg"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.854344 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bkltp"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.855601 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-skxmw"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.857079 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rpcpn"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.857285 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.858472 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wsz9t"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.864615 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-trnpt"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.864649 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-plpn9"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.864734 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-wsz9t" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.869700 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jx6g5"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.870901 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lhw6r"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.874012 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvpbt"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.875235 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.876014 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-grbn4"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.877697 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnhtd"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.879361 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-gs77j"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.881554 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mg555"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.882592 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ssg7r"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.885188 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qzc6g"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.887434 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2x58v"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.889574 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pvt9r"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.894816 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.896060 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nkstc"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.899452 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pw8nl"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.901580 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-tvv95"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.903692 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9e9103bc-a2bb-4075-8454-c6f0af5c2c29-available-featuregates\") pod 
\"openshift-config-operator-7777fb866f-hghqd\" (UID: \"9e9103bc-a2bb-4075-8454-c6f0af5c2c29\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hghqd" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.903760 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdbjh\" (UniqueName: \"kubernetes.io/projected/bab27dde-a537-445c-8d39-ad7479b66bcb-kube-api-access-rdbjh\") pod \"machine-api-operator-5694c8668f-z9qzn\" (UID: \"bab27dde-a537-445c-8d39-ad7479b66bcb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z9qzn" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.903790 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad7f4116-0c15-4b08-9edc-bacd65170a95-serving-cert\") pod \"authentication-operator-69f744f599-m6jnj\" (UID: \"ad7f4116-0c15-4b08-9edc-bacd65170a95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m6jnj" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.903990 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.904061 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr74n\" (UniqueName: \"kubernetes.io/projected/9e9103bc-a2bb-4075-8454-c6f0af5c2c29-kube-api-access-wr74n\") pod \"openshift-config-operator-7777fb866f-hghqd\" (UID: \"9e9103bc-a2bb-4075-8454-c6f0af5c2c29\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hghqd" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.904117 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrmz9\" (UniqueName: \"kubernetes.io/projected/ea61811e-2455-4157-a3f3-1376f4a11e8c-kube-api-access-mrmz9\") pod \"cluster-samples-operator-665b6dd947-fvpbt\" (UID: \"ea61811e-2455-4157-a3f3-1376f4a11e8c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvpbt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.904144 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c20ebd50-0f39-4321-84c3-1806672c78c0-auth-proxy-config\") pod \"machine-approver-56656f9798-2g78z\" (UID: \"c20ebd50-0f39-4321-84c3-1806672c78c0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2g78z" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.904185 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad7f4116-0c15-4b08-9edc-bacd65170a95-service-ca-bundle\") pod \"authentication-operator-69f744f599-m6jnj\" (UID: \"ad7f4116-0c15-4b08-9edc-bacd65170a95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m6jnj" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.904214 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-trusted-ca\") pod 
\"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.904634 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dz9cf"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.904810 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c20ebd50-0f39-4321-84c3-1806672c78c0-auth-proxy-config\") pod \"machine-approver-56656f9798-2g78z\" (UID: \"c20ebd50-0f39-4321-84c3-1806672c78c0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2g78z" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.905082 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad7f4116-0c15-4b08-9edc-bacd65170a95-service-ca-bundle\") pod \"authentication-operator-69f744f599-m6jnj\" (UID: \"ad7f4116-0c15-4b08-9edc-bacd65170a95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m6jnj" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.904256 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjp2x\" (UniqueName: \"kubernetes.io/projected/ad7f4116-0c15-4b08-9edc-bacd65170a95-kube-api-access-mjp2x\") pod \"authentication-operator-69f744f599-m6jnj\" (UID: \"ad7f4116-0c15-4b08-9edc-bacd65170a95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m6jnj" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.905182 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.905220 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad7f4116-0c15-4b08-9edc-bacd65170a95-config\") pod \"authentication-operator-69f744f599-m6jnj\" (UID: \"ad7f4116-0c15-4b08-9edc-bacd65170a95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m6jnj" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.905244 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-registry-tls\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.905311 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea61811e-2455-4157-a3f3-1376f4a11e8c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fvpbt\" (UID: \"ea61811e-2455-4157-a3f3-1376f4a11e8c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvpbt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.905346 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/c20ebd50-0f39-4321-84c3-1806672c78c0-config\") pod \"machine-approver-56656f9798-2g78z\" (UID: \"c20ebd50-0f39-4321-84c3-1806672c78c0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2g78z" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.905379 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bab27dde-a537-445c-8d39-ad7479b66bcb-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-z9qzn\" (UID: \"bab27dde-a537-445c-8d39-ad7479b66bcb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z9qzn" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.905404 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2bx5\" (UniqueName: \"kubernetes.io/projected/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-kube-api-access-k2bx5\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.905431 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clvwv\" (UniqueName: \"kubernetes.io/projected/42aab7ad-1293-4b39-8199-0b7f944a8f31-kube-api-access-clvwv\") pod \"controller-manager-879f6c89f-8tbgk\" (UID: \"42aab7ad-1293-4b39-8199-0b7f944a8f31\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.905525 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bab27dde-a537-445c-8d39-ad7479b66bcb-images\") pod \"machine-api-operator-5694c8668f-z9qzn\" (UID: \"bab27dde-a537-445c-8d39-ad7479b66bcb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z9qzn" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.905555 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42aab7ad-1293-4b39-8199-0b7f944a8f31-config\") pod \"controller-manager-879f6c89f-8tbgk\" (UID: \"42aab7ad-1293-4b39-8199-0b7f944a8f31\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.905582 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42aab7ad-1293-4b39-8199-0b7f944a8f31-serving-cert\") pod \"controller-manager-879f6c89f-8tbgk\" (UID: \"42aab7ad-1293-4b39-8199-0b7f944a8f31\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.905610 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bab27dde-a537-445c-8d39-ad7479b66bcb-config\") pod \"machine-api-operator-5694c8668f-z9qzn\" (UID: \"bab27dde-a537-445c-8d39-ad7479b66bcb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z9qzn" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.905634 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn87t\" (UniqueName: \"kubernetes.io/projected/c20ebd50-0f39-4321-84c3-1806672c78c0-kube-api-access-bn87t\") pod 
\"machine-approver-56656f9798-2g78z\" (UID: \"c20ebd50-0f39-4321-84c3-1806672c78c0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2g78z" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.905658 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:56 crc kubenswrapper[4757]: E0129 15:12:56.905694 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:12:57.405677174 +0000 UTC m=+140.694927521 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.905727 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad7f4116-0c15-4b08-9edc-bacd65170a95-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-m6jnj\" (UID: \"ad7f4116-0c15-4b08-9edc-bacd65170a95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m6jnj" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.905760 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-registry-certificates\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.905795 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c20ebd50-0f39-4321-84c3-1806672c78c0-machine-approver-tls\") pod \"machine-approver-56656f9798-2g78z\" (UID: \"c20ebd50-0f39-4321-84c3-1806672c78c0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2g78z" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.905827 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/42aab7ad-1293-4b39-8199-0b7f944a8f31-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8tbgk\" (UID: \"42aab7ad-1293-4b39-8199-0b7f944a8f31\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.905849 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e9103bc-a2bb-4075-8454-c6f0af5c2c29-serving-cert\") pod \"openshift-config-operator-7777fb866f-hghqd\" (UID: 
\"9e9103bc-a2bb-4075-8454-c6f0af5c2c29\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hghqd" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.905877 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-bound-sa-token\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.905896 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42aab7ad-1293-4b39-8199-0b7f944a8f31-client-ca\") pod \"controller-manager-879f6c89f-8tbgk\" (UID: \"42aab7ad-1293-4b39-8199-0b7f944a8f31\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.906499 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad7f4116-0c15-4b08-9edc-bacd65170a95-config\") pod \"authentication-operator-69f744f599-m6jnj\" (UID: \"ad7f4116-0c15-4b08-9edc-bacd65170a95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m6jnj" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.906936 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c20ebd50-0f39-4321-84c3-1806672c78c0-config\") pod \"machine-approver-56656f9798-2g78z\" (UID: \"c20ebd50-0f39-4321-84c3-1806672c78c0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2g78z" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.907596 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad7f4116-0c15-4b08-9edc-bacd65170a95-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-m6jnj\" (UID: \"ad7f4116-0c15-4b08-9edc-bacd65170a95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m6jnj" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.908258 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad7f4116-0c15-4b08-9edc-bacd65170a95-serving-cert\") pod \"authentication-operator-69f744f599-m6jnj\" (UID: \"ad7f4116-0c15-4b08-9edc-bacd65170a95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m6jnj" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.909349 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bab27dde-a537-445c-8d39-ad7479b66bcb-config\") pod \"machine-api-operator-5694c8668f-z9qzn\" (UID: \"bab27dde-a537-445c-8d39-ad7479b66bcb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z9qzn" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.909937 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bab27dde-a537-445c-8d39-ad7479b66bcb-images\") pod \"machine-api-operator-5694c8668f-z9qzn\" (UID: \"bab27dde-a537-445c-8d39-ad7479b66bcb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z9qzn" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.910872 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-dkjl5"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.911138 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bab27dde-a537-445c-8d39-ad7479b66bcb-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-z9qzn\" (UID: \"bab27dde-a537-445c-8d39-ad7479b66bcb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z9qzn" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.911212 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea61811e-2455-4157-a3f3-1376f4a11e8c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fvpbt\" (UID: \"ea61811e-2455-4157-a3f3-1376f4a11e8c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvpbt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.912759 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-xfk54"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.915242 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.917866 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c20ebd50-0f39-4321-84c3-1806672c78c0-machine-approver-tls\") pod \"machine-approver-56656f9798-2g78z\" (UID: \"c20ebd50-0f39-4321-84c3-1806672c78c0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2g78z" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.920232 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xfk54" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.922435 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494980-9zrww"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.922518 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n44qs"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.923118 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xfk54"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.924175 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wsz9t"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.925548 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-m9f2c"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.926475 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-m9f2c" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.932754 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-m9f2c"] Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.936499 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.955819 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.975621 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 15:12:56 crc kubenswrapper[4757]: I0129 15:12:56.995034 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.006726 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:12:57 crc kubenswrapper[4757]: E0129 15:12:57.006861 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:12:57.506839659 +0000 UTC m=+140.796089896 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.006915 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7add9ebb-c4ec-4eed-affb-bdd76b207c29-srv-cert\") pod \"olm-operator-6b444d44fb-dz9cf\" (UID: \"7add9ebb-c4ec-4eed-affb-bdd76b207c29\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dz9cf" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.006954 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8-etcd-service-ca\") pod \"etcd-operator-b45778765-k5nbp\" (UID: \"cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5nbp" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.006995 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8351b0cf-f243-4fe3-ba94-30f3ee17320e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2x58v\" (UID: \"8351b0cf-f243-4fe3-ba94-30f3ee17320e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2x58v" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.007077 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fa387e7d-5a82-4577-bbe3-ea5aeb17adc2-csi-data-dir\") pod \"csi-hostpathplugin-wsz9t\" (UID: \"fa387e7d-5a82-4577-bbe3-ea5aeb17adc2\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz9t" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.007104 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brpr8\" (UniqueName: \"kubernetes.io/projected/1e7cba3a-da69-495d-8f3c-286a75ca8e48-kube-api-access-brpr8\") pod \"dns-default-m9f2c\" (UID: \"1e7cba3a-da69-495d-8f3c-286a75ca8e48\") " pod="openshift-dns/dns-default-m9f2c" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.007143 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/69d5d601-aa72-4044-ad14-81c12a34c8f0-signing-cabundle\") pod \"service-ca-9c57cc56f-m7q76\" (UID: \"69d5d601-aa72-4044-ad14-81c12a34c8f0\") " pod="openshift-service-ca/service-ca-9c57cc56f-m7q76" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.007168 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-registry-tls\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.007191 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b0330c1-19bb-492e-815a-2827e5749d68-etcd-client\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.007328 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlr7r\" (UniqueName: \"kubernetes.io/projected/d68b032e-f86c-4928-a676-03c9e49c6722-kube-api-access-nlr7r\") pod \"marketplace-operator-79b997595-grbn4\" (UID: \"d68b032e-f86c-4928-a676-03c9e49c6722\") " pod="openshift-marketplace/marketplace-operator-79b997595-grbn4" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.007718 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8-etcd-ca\") pod \"etcd-operator-b45778765-k5nbp\" (UID: \"cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5nbp" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.007741 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfgxh\" (UniqueName: \"kubernetes.io/projected/cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8-kube-api-access-bfgxh\") pod \"etcd-operator-b45778765-k5nbp\" (UID: \"cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5nbp" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.007820 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/589345a6-68e3-4e06-bf66-b30c3457f59c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-x9p7l\" (UID: \"589345a6-68e3-4e06-bf66-b30c3457f59c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9p7l" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.007905 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e93cae3-c9b6-493c-a8cc-c09cc83b0dca-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-pw8nl\" (UID: \"1e93cae3-c9b6-493c-a8cc-c09cc83b0dca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pw8nl" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.008143 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/905f21b5-42ca-4558-b66c-b957fd41c9e8-tmpfs\") pod \"packageserver-d55dfcdfc-wnhtd\" (UID: \"905f21b5-42ca-4558-b66c-b957fd41c9e8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnhtd" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.008212 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.008242 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-f6vlw\" (UniqueName: \"kubernetes.io/projected/3e6ceaed-34b1-4c4f-abe3-96756d34e30f-kube-api-access-f6vlw\") pod \"downloads-7954f5f757-gs77j\" (UID: \"3e6ceaed-34b1-4c4f-abe3-96756d34e30f\") " pod="openshift-console/downloads-7954f5f757-gs77j" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.008294 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dd74c040-9e89-4c40-8e16-5ae6c0f6e65f-metrics-tls\") pod \"ingress-operator-5b745b69d9-tvv95\" (UID: \"dd74c040-9e89-4c40-8e16-5ae6c0f6e65f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tvv95" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.008318 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a0f71154-b1ff-4e61-9c93-8bcb95678bce-service-ca\") pod \"console-f9d7485db-skxmw\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.008338 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e93cae3-c9b6-493c-a8cc-c09cc83b0dca-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-pw8nl\" (UID: \"1e93cae3-c9b6-493c-a8cc-c09cc83b0dca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pw8nl" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.008383 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24h4l\" (UniqueName: \"kubernetes.io/projected/88566fd4-0a9f-42dd-a6d5-989dc7176aea-kube-api-access-24h4l\") pod \"migrator-59844c95c7-dkjl5\" (UID: \"88566fd4-0a9f-42dd-a6d5-989dc7176aea\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dkjl5" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.008405 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5ljf\" (UniqueName: \"kubernetes.io/projected/818a92e0-3e21-4f17-8950-a74066570368-kube-api-access-f5ljf\") pod \"openshift-controller-manager-operator-756b6f6bc6-bkltp\" (UID: \"818a92e0-3e21-4f17-8950-a74066570368\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bkltp" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.008448 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.009088 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42aab7ad-1293-4b39-8199-0b7f944a8f31-serving-cert\") pod \"controller-manager-879f6c89f-8tbgk\" (UID: \"42aab7ad-1293-4b39-8199-0b7f944a8f31\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.009122 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0f71154-b1ff-4e61-9c93-8bcb95678bce-console-serving-cert\") pod \"console-f9d7485db-skxmw\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.009145 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/905f21b5-42ca-4558-b66c-b957fd41c9e8-apiservice-cert\") pod \"packageserver-d55dfcdfc-wnhtd\" (UID: \"905f21b5-42ca-4558-b66c-b957fd41c9e8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnhtd" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.009191 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9dd1f071-c13e-42a1-80bd-81d4121b0cdc-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lhw6r\" (UID: \"9dd1f071-c13e-42a1-80bd-81d4121b0cdc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lhw6r" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.009214 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/589345a6-68e3-4e06-bf66-b30c3457f59c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-x9p7l\" (UID: \"589345a6-68e3-4e06-bf66-b30c3457f59c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9p7l" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.009290 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmsld\" (UniqueName: \"kubernetes.io/projected/e9d54611-82e4-4698-b654-62a1d7144225-kube-api-access-zmsld\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.009313 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrcsn\" (UniqueName: \"kubernetes.io/projected/0b0330c1-19bb-492e-815a-2827e5749d68-kube-api-access-lrcsn\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.009333 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ace4ebfd-1a19-4556-a22e-d9cc9ce6d143-cert\") pod \"ingress-canary-xfk54\" (UID: \"ace4ebfd-1a19-4556-a22e-d9cc9ce6d143\") " pod="openshift-ingress-canary/ingress-canary-xfk54" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.009377 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd74c040-9e89-4c40-8e16-5ae6c0f6e65f-trusted-ca\") pod \"ingress-operator-5b745b69d9-tvv95\" (UID: \"dd74c040-9e89-4c40-8e16-5ae6c0f6e65f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tvv95" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.009403 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8119cbd7-40a4-4875-b49c-1e982ec9acd8-serving-cert\") 
pod \"service-ca-operator-777779d784-rpcpn\" (UID: \"8119cbd7-40a4-4875-b49c-1e982ec9acd8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpcpn" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.009429 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.009489 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ca0b207e-f487-4256-b01b-47aecb6921b6-certs\") pod \"machine-config-server-mfd9g\" (UID: \"ca0b207e-f487-4256-b01b-47aecb6921b6\") " pod="openshift-machine-config-operator/machine-config-server-mfd9g" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.009538 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b10bc118-1493-4055-a8c2-1a1b9aca7c91-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qzc6g\" (UID: \"b10bc118-1493-4055-a8c2-1a1b9aca7c91\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qzc6g" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.009563 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/818a92e0-3e21-4f17-8950-a74066570368-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bkltp\" (UID: \"818a92e0-3e21-4f17-8950-a74066570368\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bkltp" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.009640 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0b0330c1-19bb-492e-815a-2827e5749d68-audit\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.009697 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0f71154-b1ff-4e61-9c93-8bcb95678bce-trusted-ca-bundle\") pod \"console-f9d7485db-skxmw\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.009723 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2f42\" (UniqueName: \"kubernetes.io/projected/c8548b94-9099-42d5-914d-c2c10561bc5a-kube-api-access-j2f42\") pod \"collect-profiles-29494980-9zrww\" (UID: \"c8548b94-9099-42d5-914d-c2c10561bc5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-9zrww" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.009898 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a0f71154-b1ff-4e61-9c93-8bcb95678bce-console-config\") pod 
\"console-f9d7485db-skxmw\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.010057 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8-config\") pod \"etcd-operator-b45778765-k5nbp\" (UID: \"cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5nbp" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.010205 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.010237 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/69d5d601-aa72-4044-ad14-81c12a34c8f0-signing-key\") pod \"service-ca-9c57cc56f-m7q76\" (UID: \"69d5d601-aa72-4044-ad14-81c12a34c8f0\") " pod="openshift-service-ca/service-ca-9c57cc56f-m7q76" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.010298 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9dd1f071-c13e-42a1-80bd-81d4121b0cdc-images\") pod \"machine-config-operator-74547568cd-lhw6r\" (UID: \"9dd1f071-c13e-42a1-80bd-81d4121b0cdc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lhw6r" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.016546 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f404652-4bd9-4720-b625-01ae3c2d29fa-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-jx6g5\" (UID: \"1f404652-4bd9-4720-b625-01ae3c2d29fa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jx6g5" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.016833 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/08b2186f-939e-4005-9fd9-1f1cc7b087d8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nkstc\" (UID: \"08b2186f-939e-4005-9fd9-1f1cc7b087d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nkstc" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.016905 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e9103bc-a2bb-4075-8454-c6f0af5c2c29-serving-cert\") pod \"openshift-config-operator-7777fb866f-hghqd\" (UID: \"9e9103bc-a2bb-4075-8454-c6f0af5c2c29\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hghqd" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.016941 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.016984 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8351b0cf-f243-4fe3-ba94-30f3ee17320e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2x58v\" (UID: \"8351b0cf-f243-4fe3-ba94-30f3ee17320e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2x58v" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.017019 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b0330c1-19bb-492e-815a-2827e5749d68-serving-cert\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.017078 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/42aab7ad-1293-4b39-8199-0b7f944a8f31-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8tbgk\" (UID: \"42aab7ad-1293-4b39-8199-0b7f944a8f31\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.017698 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42aab7ad-1293-4b39-8199-0b7f944a8f31-serving-cert\") pod \"controller-manager-879f6c89f-8tbgk\" (UID: \"42aab7ad-1293-4b39-8199-0b7f944a8f31\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.017771 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8-serving-cert\") pod \"etcd-operator-b45778765-k5nbp\" (UID: \"cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5nbp" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.017859 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.018118 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.018172 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqjsp\" (UniqueName: 
\"kubernetes.io/projected/1e93cae3-c9b6-493c-a8cc-c09cc83b0dca-kube-api-access-hqjsp\") pod \"kube-storage-version-migrator-operator-b67b599dd-pw8nl\" (UID: \"1e93cae3-c9b6-493c-a8cc-c09cc83b0dca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pw8nl" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.018240 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.018338 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a5122101-998b-48d5-ae6e-c4746b2ba055-stats-auth\") pod \"router-default-5444994796-h9rvk\" (UID: \"a5122101-998b-48d5-ae6e-c4746b2ba055\") " pod="openshift-ingress/router-default-5444994796-h9rvk" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.018376 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0b0330c1-19bb-492e-815a-2827e5749d68-etcd-serving-ca\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.018405 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/616e840f-aaeb-48cc-b979-f690d54a8c95-metrics-tls\") pod \"dns-operator-744455d44c-pvt9r\" (UID: \"616e840f-aaeb-48cc-b979-f690d54a8c95\") " pod="openshift-dns-operator/dns-operator-744455d44c-pvt9r" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.018442 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fa387e7d-5a82-4577-bbe3-ea5aeb17adc2-socket-dir\") pod \"csi-hostpathplugin-wsz9t\" (UID: \"fa387e7d-5a82-4577-bbe3-ea5aeb17adc2\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz9t" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.018478 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/589345a6-68e3-4e06-bf66-b30c3457f59c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-x9p7l\" (UID: \"589345a6-68e3-4e06-bf66-b30c3457f59c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9p7l" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.018457 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-registry-tls\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.018516 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.018547 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bp6c\" (UniqueName: \"kubernetes.io/projected/9dd1f071-c13e-42a1-80bd-81d4121b0cdc-kube-api-access-6bp6c\") pod \"machine-config-operator-74547568cd-lhw6r\" (UID: \"9dd1f071-c13e-42a1-80bd-81d4121b0cdc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lhw6r" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.018816 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b0330c1-19bb-492e-815a-2827e5749d68-trusted-ca-bundle\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.018821 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.018864 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr74n\" (UniqueName: \"kubernetes.io/projected/9e9103bc-a2bb-4075-8454-c6f0af5c2c29-kube-api-access-wr74n\") pod \"openshift-config-operator-7777fb866f-hghqd\" (UID: \"9e9103bc-a2bb-4075-8454-c6f0af5c2c29\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hghqd" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.018945 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9t8k\" (UniqueName: \"kubernetes.io/projected/616e840f-aaeb-48cc-b979-f690d54a8c95-kube-api-access-k9t8k\") pod \"dns-operator-744455d44c-pvt9r\" (UID: \"616e840f-aaeb-48cc-b979-f690d54a8c95\") " pod="openshift-dns-operator/dns-operator-744455d44c-pvt9r" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.018983 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbj7w\" (UniqueName: \"kubernetes.io/projected/69d5d601-aa72-4044-ad14-81c12a34c8f0-kube-api-access-hbj7w\") pod \"service-ca-9c57cc56f-m7q76\" (UID: \"69d5d601-aa72-4044-ad14-81c12a34c8f0\") " pod="openshift-service-ca/service-ca-9c57cc56f-m7q76" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.019088 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntvkh\" (UniqueName: \"kubernetes.io/projected/da973e95-27c1-4f17-87e4-79bf0bc0e0fe-kube-api-access-ntvkh\") pod \"multus-admission-controller-857f4d67dd-trnpt\" (UID: \"da973e95-27c1-4f17-87e4-79bf0bc0e0fe\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-trnpt" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.019170 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0379bce9-e0c6-4283-8fb4-fcf300dc30bf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-plpn9\" (UID: \"0379bce9-e0c6-4283-8fb4-fcf300dc30bf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-plpn9" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.019336 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f10cf2ea-d11c-422e-9f8e-b93d422df097-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ssg7r\" (UID: \"f10cf2ea-d11c-422e-9f8e-b93d422df097\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ssg7r" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.019356 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.019406 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b0330c1-19bb-492e-815a-2827e5749d68-config\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.019437 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6be95c99-c279-4066-a0c6-b1499d8f7e07-srv-cert\") pod \"catalog-operator-68c6474976-n44qs\" (UID: \"6be95c99-c279-4066-a0c6-b1499d8f7e07\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n44qs" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.019477 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d68b032e-f86c-4928-a676-03c9e49c6722-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-grbn4\" (UID: \"d68b032e-f86c-4928-a676-03c9e49c6722\") " pod="openshift-marketplace/marketplace-operator-79b997595-grbn4" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.019559 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f404652-4bd9-4720-b625-01ae3c2d29fa-config\") pod \"kube-controller-manager-operator-78b949d7b-jx6g5\" (UID: \"1f404652-4bd9-4720-b625-01ae3c2d29fa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jx6g5" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.019643 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e7cba3a-da69-495d-8f3c-286a75ca8e48-config-volume\") pod \"dns-default-m9f2c\" (UID: \"1e7cba3a-da69-495d-8f3c-286a75ca8e48\") " pod="openshift-dns/dns-default-m9f2c" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.019791 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 
15:12:57.019817 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-trusted-ca\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.019863 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/08b2186f-939e-4005-9fd9-1f1cc7b087d8-proxy-tls\") pod \"machine-config-controller-84d6567774-nkstc\" (UID: \"08b2186f-939e-4005-9fd9-1f1cc7b087d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nkstc" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.019900 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kqpw\" (UniqueName: \"kubernetes.io/projected/905f21b5-42ca-4558-b66c-b957fd41c9e8-kube-api-access-8kqpw\") pod \"packageserver-d55dfcdfc-wnhtd\" (UID: \"905f21b5-42ca-4558-b66c-b957fd41c9e8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnhtd" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.019931 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1e7cba3a-da69-495d-8f3c-286a75ca8e48-metrics-tls\") pod \"dns-default-m9f2c\" (UID: \"1e7cba3a-da69-495d-8f3c-286a75ca8e48\") " pod="openshift-dns/dns-default-m9f2c" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.020300 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.020519 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0b0330c1-19bb-492e-815a-2827e5749d68-encryption-config\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.020561 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7add9ebb-c4ec-4eed-affb-bdd76b207c29-profile-collector-cert\") pod \"olm-operator-6b444d44fb-dz9cf\" (UID: \"7add9ebb-c4ec-4eed-affb-bdd76b207c29\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dz9cf" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.020586 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8351b0cf-f243-4fe3-ba94-30f3ee17320e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2x58v\" (UID: \"8351b0cf-f243-4fe3-ba94-30f3ee17320e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2x58v" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.020618 4757 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcb581d1-4a29-4bf3-9df9-1669cf88e9f3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-h7pw2\" (UID: \"bcb581d1-4a29-4bf3-9df9-1669cf88e9f3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h7pw2" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.020658 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf8gx\" (UniqueName: \"kubernetes.io/projected/08b2186f-939e-4005-9fd9-1f1cc7b087d8-kube-api-access-jf8gx\") pod \"machine-config-controller-84d6567774-nkstc\" (UID: \"08b2186f-939e-4005-9fd9-1f1cc7b087d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nkstc" Jan 29 15:12:57 crc kubenswrapper[4757]: E0129 15:12:57.021243 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:12:57.521230084 +0000 UTC m=+140.810480321 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.021924 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2bx5\" (UniqueName: \"kubernetes.io/projected/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-kube-api-access-k2bx5\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.021959 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clvwv\" (UniqueName: \"kubernetes.io/projected/42aab7ad-1293-4b39-8199-0b7f944a8f31-kube-api-access-clvwv\") pod \"controller-manager-879f6c89f-8tbgk\" (UID: \"42aab7ad-1293-4b39-8199-0b7f944a8f31\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.022183 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8119cbd7-40a4-4875-b49c-1e982ec9acd8-config\") pod \"service-ca-operator-777779d784-rpcpn\" (UID: \"8119cbd7-40a4-4875-b49c-1e982ec9acd8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpcpn" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.022207 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5122101-998b-48d5-ae6e-c4746b2ba055-service-ca-bundle\") pod \"router-default-5444994796-h9rvk\" (UID: \"a5122101-998b-48d5-ae6e-c4746b2ba055\") " pod="openshift-ingress/router-default-5444994796-h9rvk" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.022233 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sczqk\" 
(UniqueName: \"kubernetes.io/projected/b10bc118-1493-4055-a8c2-1a1b9aca7c91-kube-api-access-sczqk\") pod \"control-plane-machine-set-operator-78cbb6b69f-qzc6g\" (UID: \"b10bc118-1493-4055-a8c2-1a1b9aca7c91\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qzc6g" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.022733 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42aab7ad-1293-4b39-8199-0b7f944a8f31-config\") pod \"controller-manager-879f6c89f-8tbgk\" (UID: \"42aab7ad-1293-4b39-8199-0b7f944a8f31\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.022809 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8548b94-9099-42d5-914d-c2c10561bc5a-config-volume\") pod \"collect-profiles-29494980-9zrww\" (UID: \"c8548b94-9099-42d5-914d-c2c10561bc5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-9zrww" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.023112 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/42aab7ad-1293-4b39-8199-0b7f944a8f31-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-8tbgk\" (UID: \"42aab7ad-1293-4b39-8199-0b7f944a8f31\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.023237 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/905f21b5-42ca-4558-b66c-b957fd41c9e8-webhook-cert\") pod \"packageserver-d55dfcdfc-wnhtd\" (UID: \"905f21b5-42ca-4558-b66c-b957fd41c9e8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnhtd" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.023487 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9d54611-82e4-4698-b654-62a1d7144225-audit-policies\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.024183 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-trusted-ca\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.024627 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42aab7ad-1293-4b39-8199-0b7f944a8f31-config\") pod \"controller-manager-879f6c89f-8tbgk\" (UID: \"42aab7ad-1293-4b39-8199-0b7f944a8f31\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.025478 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6be95c99-c279-4066-a0c6-b1499d8f7e07-profile-collector-cert\") pod \"catalog-operator-68c6474976-n44qs\" (UID: 
\"6be95c99-c279-4066-a0c6-b1499d8f7e07\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n44qs" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.025586 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/818a92e0-3e21-4f17-8950-a74066570368-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bkltp\" (UID: \"818a92e0-3e21-4f17-8950-a74066570368\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bkltp" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.025631 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb2nv\" (UniqueName: \"kubernetes.io/projected/6be95c99-c279-4066-a0c6-b1499d8f7e07-kube-api-access-kb2nv\") pod \"catalog-operator-68c6474976-n44qs\" (UID: \"6be95c99-c279-4066-a0c6-b1499d8f7e07\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n44qs" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.025665 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf49j\" (UniqueName: \"kubernetes.io/projected/dd74c040-9e89-4c40-8e16-5ae6c0f6e65f-kube-api-access-rf49j\") pod \"ingress-operator-5b745b69d9-tvv95\" (UID: \"dd74c040-9e89-4c40-8e16-5ae6c0f6e65f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tvv95" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.025709 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-registry-certificates\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.025902 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e9103bc-a2bb-4075-8454-c6f0af5c2c29-serving-cert\") pod \"openshift-config-operator-7777fb866f-hghqd\" (UID: \"9e9103bc-a2bb-4075-8454-c6f0af5c2c29\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hghqd" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.025917 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ca0b207e-f487-4256-b01b-47aecb6921b6-node-bootstrap-token\") pod \"machine-config-server-mfd9g\" (UID: \"ca0b207e-f487-4256-b01b-47aecb6921b6\") " pod="openshift-machine-config-operator/machine-config-server-mfd9g" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.026031 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dd74c040-9e89-4c40-8e16-5ae6c0f6e65f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-tvv95\" (UID: \"dd74c040-9e89-4c40-8e16-5ae6c0f6e65f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tvv95" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.026092 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-session\") pod 
\"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.026135 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krf9j\" (UniqueName: \"kubernetes.io/projected/f10cf2ea-d11c-422e-9f8e-b93d422df097-kube-api-access-krf9j\") pod \"package-server-manager-789f6589d5-ssg7r\" (UID: \"f10cf2ea-d11c-422e-9f8e-b93d422df097\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ssg7r" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.026196 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d68b032e-f86c-4928-a676-03c9e49c6722-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-grbn4\" (UID: \"d68b032e-f86c-4928-a676-03c9e49c6722\") " pod="openshift-marketplace/marketplace-operator-79b997595-grbn4" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.026234 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8548b94-9099-42d5-914d-c2c10561bc5a-secret-volume\") pod \"collect-profiles-29494980-9zrww\" (UID: \"c8548b94-9099-42d5-914d-c2c10561bc5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-9zrww" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.026293 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fa387e7d-5a82-4577-bbe3-ea5aeb17adc2-plugins-dir\") pod \"csi-hostpathplugin-wsz9t\" (UID: \"fa387e7d-5a82-4577-bbe3-ea5aeb17adc2\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz9t" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.026326 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwbsq\" (UniqueName: \"kubernetes.io/projected/ace4ebfd-1a19-4556-a22e-d9cc9ce6d143-kube-api-access-hwbsq\") pod \"ingress-canary-xfk54\" (UID: \"ace4ebfd-1a19-4556-a22e-d9cc9ce6d143\") " pod="openshift-ingress-canary/ingress-canary-xfk54" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.026415 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9d54611-82e4-4698-b654-62a1d7144225-audit-dir\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.026464 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0379bce9-e0c6-4283-8fb4-fcf300dc30bf-config\") pod \"kube-apiserver-operator-766d6c64bb-plpn9\" (UID: \"0379bce9-e0c6-4283-8fb4-fcf300dc30bf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-plpn9" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.027017 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8-etcd-client\") pod \"etcd-operator-b45778765-k5nbp\" (UID: \"cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-k5nbp" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.027081 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42aab7ad-1293-4b39-8199-0b7f944a8f31-client-ca\") pod \"controller-manager-879f6c89f-8tbgk\" (UID: \"42aab7ad-1293-4b39-8199-0b7f944a8f31\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.027143 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk4ds\" (UniqueName: \"kubernetes.io/projected/ca0b207e-f487-4256-b01b-47aecb6921b6-kube-api-access-vk4ds\") pod \"machine-config-server-mfd9g\" (UID: \"ca0b207e-f487-4256-b01b-47aecb6921b6\") " pod="openshift-machine-config-operator/machine-config-server-mfd9g" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.027200 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnbdj\" (UniqueName: \"kubernetes.io/projected/fa387e7d-5a82-4577-bbe3-ea5aeb17adc2-kube-api-access-vnbdj\") pod \"csi-hostpathplugin-wsz9t\" (UID: \"fa387e7d-5a82-4577-bbe3-ea5aeb17adc2\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz9t" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.027862 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-registry-certificates\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.027879 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-bound-sa-token\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.027963 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42aab7ad-1293-4b39-8199-0b7f944a8f31-client-ca\") pod \"controller-manager-879f6c89f-8tbgk\" (UID: \"42aab7ad-1293-4b39-8199-0b7f944a8f31\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.030910 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.030975 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0b0330c1-19bb-492e-815a-2827e5749d68-image-import-ca\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.031011 4757 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcb581d1-4a29-4bf3-9df9-1669cf88e9f3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-h7pw2\" (UID: \"bcb581d1-4a29-4bf3-9df9-1669cf88e9f3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h7pw2" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.031033 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f404652-4bd9-4720-b625-01ae3c2d29fa-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-jx6g5\" (UID: \"1f404652-4bd9-4720-b625-01ae3c2d29fa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jx6g5" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.031062 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9e9103bc-a2bb-4075-8454-c6f0af5c2c29-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hghqd\" (UID: \"9e9103bc-a2bb-4075-8454-c6f0af5c2c29\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hghqd" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.031112 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fa387e7d-5a82-4577-bbe3-ea5aeb17adc2-mountpoint-dir\") pod \"csi-hostpathplugin-wsz9t\" (UID: \"fa387e7d-5a82-4577-bbe3-ea5aeb17adc2\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz9t" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.031133 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfckq\" (UniqueName: \"kubernetes.io/projected/8119cbd7-40a4-4875-b49c-1e982ec9acd8-kube-api-access-sfckq\") pod \"service-ca-operator-777779d784-rpcpn\" (UID: \"8119cbd7-40a4-4875-b49c-1e982ec9acd8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpcpn" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.031168 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmsm6\" (UniqueName: \"kubernetes.io/projected/bcb581d1-4a29-4bf3-9df9-1669cf88e9f3-kube-api-access-nmsm6\") pod \"openshift-apiserver-operator-796bbdcf4f-h7pw2\" (UID: \"bcb581d1-4a29-4bf3-9df9-1669cf88e9f3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h7pw2" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.031217 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a5122101-998b-48d5-ae6e-c4746b2ba055-default-certificate\") pod \"router-default-5444994796-h9rvk\" (UID: \"a5122101-998b-48d5-ae6e-c4746b2ba055\") " pod="openshift-ingress/router-default-5444994796-h9rvk" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.031244 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntmc8\" (UniqueName: \"kubernetes.io/projected/a5122101-998b-48d5-ae6e-c4746b2ba055-kube-api-access-ntmc8\") pod \"router-default-5444994796-h9rvk\" (UID: \"a5122101-998b-48d5-ae6e-c4746b2ba055\") " pod="openshift-ingress/router-default-5444994796-h9rvk" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.031286 4757 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.031319 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fa387e7d-5a82-4577-bbe3-ea5aeb17adc2-registration-dir\") pod \"csi-hostpathplugin-wsz9t\" (UID: \"fa387e7d-5a82-4577-bbe3-ea5aeb17adc2\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz9t" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.031415 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9dd1f071-c13e-42a1-80bd-81d4121b0cdc-proxy-tls\") pod \"machine-config-operator-74547568cd-lhw6r\" (UID: \"9dd1f071-c13e-42a1-80bd-81d4121b0cdc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lhw6r" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.031692 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dngr\" (UniqueName: \"kubernetes.io/projected/7add9ebb-c4ec-4eed-affb-bdd76b207c29-kube-api-access-6dngr\") pod \"olm-operator-6b444d44fb-dz9cf\" (UID: \"7add9ebb-c4ec-4eed-affb-bdd76b207c29\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dz9cf" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.031747 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/da973e95-27c1-4f17-87e4-79bf0bc0e0fe-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-trnpt\" (UID: \"da973e95-27c1-4f17-87e4-79bf0bc0e0fe\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-trnpt" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.031780 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9e9103bc-a2bb-4075-8454-c6f0af5c2c29-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hghqd\" (UID: \"9e9103bc-a2bb-4075-8454-c6f0af5c2c29\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hghqd" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.031779 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.032594 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr5xt\" (UniqueName: \"kubernetes.io/projected/589345a6-68e3-4e06-bf66-b30c3457f59c-kube-api-access-pr5xt\") pod \"cluster-image-registry-operator-dc59b4c8b-x9p7l\" (UID: \"589345a6-68e3-4e06-bf66-b30c3457f59c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9p7l" Jan 29 15:12:57 
crc kubenswrapper[4757]: I0129 15:12:57.032631 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0379bce9-e0c6-4283-8fb4-fcf300dc30bf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-plpn9\" (UID: \"0379bce9-e0c6-4283-8fb4-fcf300dc30bf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-plpn9" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.032655 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a0f71154-b1ff-4e61-9c93-8bcb95678bce-oauth-serving-cert\") pod \"console-f9d7485db-skxmw\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.032674 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcp8m\" (UniqueName: \"kubernetes.io/projected/a0f71154-b1ff-4e61-9c93-8bcb95678bce-kube-api-access-kcp8m\") pod \"console-f9d7485db-skxmw\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.032716 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0b0330c1-19bb-492e-815a-2827e5749d68-node-pullsecrets\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.032751 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b0330c1-19bb-492e-815a-2827e5749d68-audit-dir\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.033085 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5122101-998b-48d5-ae6e-c4746b2ba055-metrics-certs\") pod \"router-default-5444994796-h9rvk\" (UID: \"a5122101-998b-48d5-ae6e-c4746b2ba055\") " pod="openshift-ingress/router-default-5444994796-h9rvk" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.033138 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a0f71154-b1ff-4e61-9c93-8bcb95678bce-console-oauth-config\") pod \"console-f9d7485db-skxmw\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.035612 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.055167 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.075370 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 
15:12:57.095957 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.115084 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.134407 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:12:57 crc kubenswrapper[4757]: E0129 15:12:57.134524 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:12:57.634500265 +0000 UTC m=+140.923750522 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.136208 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.136443 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/08b2186f-939e-4005-9fd9-1f1cc7b087d8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nkstc\" (UID: \"08b2186f-939e-4005-9fd9-1f1cc7b087d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nkstc" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.136551 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.136597 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/69d5d601-aa72-4044-ad14-81c12a34c8f0-signing-key\") pod \"service-ca-9c57cc56f-m7q76\" (UID: \"69d5d601-aa72-4044-ad14-81c12a34c8f0\") " pod="openshift-service-ca/service-ca-9c57cc56f-m7q76" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.136633 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9dd1f071-c13e-42a1-80bd-81d4121b0cdc-images\") pod \"machine-config-operator-74547568cd-lhw6r\" (UID: \"9dd1f071-c13e-42a1-80bd-81d4121b0cdc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lhw6r" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 
15:12:57.136666 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f404652-4bd9-4720-b625-01ae3c2d29fa-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-jx6g5\" (UID: \"1f404652-4bd9-4720-b625-01ae3c2d29fa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jx6g5" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.136701 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.136731 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8351b0cf-f243-4fe3-ba94-30f3ee17320e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2x58v\" (UID: \"8351b0cf-f243-4fe3-ba94-30f3ee17320e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2x58v" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.136764 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b0330c1-19bb-492e-815a-2827e5749d68-serving-cert\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.136797 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8-serving-cert\") pod \"etcd-operator-b45778765-k5nbp\" (UID: \"cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5nbp" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.136860 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.137079 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.137171 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqjsp\" (UniqueName: \"kubernetes.io/projected/1e93cae3-c9b6-493c-a8cc-c09cc83b0dca-kube-api-access-hqjsp\") pod \"kube-storage-version-migrator-operator-b67b599dd-pw8nl\" (UID: \"1e93cae3-c9b6-493c-a8cc-c09cc83b0dca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pw8nl" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.137216 
4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a5122101-998b-48d5-ae6e-c4746b2ba055-stats-auth\") pod \"router-default-5444994796-h9rvk\" (UID: \"a5122101-998b-48d5-ae6e-c4746b2ba055\") " pod="openshift-ingress/router-default-5444994796-h9rvk" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.137327 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0b0330c1-19bb-492e-815a-2827e5749d68-etcd-serving-ca\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.137393 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/616e840f-aaeb-48cc-b979-f690d54a8c95-metrics-tls\") pod \"dns-operator-744455d44c-pvt9r\" (UID: \"616e840f-aaeb-48cc-b979-f690d54a8c95\") " pod="openshift-dns-operator/dns-operator-744455d44c-pvt9r" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.137431 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/589345a6-68e3-4e06-bf66-b30c3457f59c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-x9p7l\" (UID: \"589345a6-68e3-4e06-bf66-b30c3457f59c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9p7l" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.137552 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fa387e7d-5a82-4577-bbe3-ea5aeb17adc2-socket-dir\") pod \"csi-hostpathplugin-wsz9t\" (UID: \"fa387e7d-5a82-4577-bbe3-ea5aeb17adc2\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz9t" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.138105 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/08b2186f-939e-4005-9fd9-1f1cc7b087d8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nkstc\" (UID: \"08b2186f-939e-4005-9fd9-1f1cc7b087d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nkstc" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.139045 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0b0330c1-19bb-492e-815a-2827e5749d68-etcd-serving-ca\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.139329 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fa387e7d-5a82-4577-bbe3-ea5aeb17adc2-socket-dir\") pod \"csi-hostpathplugin-wsz9t\" (UID: \"fa387e7d-5a82-4577-bbe3-ea5aeb17adc2\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz9t" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.139491 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mg555\" (UID: 
\"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.139549 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bp6c\" (UniqueName: \"kubernetes.io/projected/9dd1f071-c13e-42a1-80bd-81d4121b0cdc-kube-api-access-6bp6c\") pod \"machine-config-operator-74547568cd-lhw6r\" (UID: \"9dd1f071-c13e-42a1-80bd-81d4121b0cdc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lhw6r" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.139599 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b0330c1-19bb-492e-815a-2827e5749d68-trusted-ca-bundle\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.139645 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntvkh\" (UniqueName: \"kubernetes.io/projected/da973e95-27c1-4f17-87e4-79bf0bc0e0fe-kube-api-access-ntvkh\") pod \"multus-admission-controller-857f4d67dd-trnpt\" (UID: \"da973e95-27c1-4f17-87e4-79bf0bc0e0fe\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-trnpt" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.139679 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9t8k\" (UniqueName: \"kubernetes.io/projected/616e840f-aaeb-48cc-b979-f690d54a8c95-kube-api-access-k9t8k\") pod \"dns-operator-744455d44c-pvt9r\" (UID: \"616e840f-aaeb-48cc-b979-f690d54a8c95\") " pod="openshift-dns-operator/dns-operator-744455d44c-pvt9r" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.139710 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbj7w\" (UniqueName: \"kubernetes.io/projected/69d5d601-aa72-4044-ad14-81c12a34c8f0-kube-api-access-hbj7w\") pod \"service-ca-9c57cc56f-m7q76\" (UID: \"69d5d601-aa72-4044-ad14-81c12a34c8f0\") " pod="openshift-service-ca/service-ca-9c57cc56f-m7q76" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.139744 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f10cf2ea-d11c-422e-9f8e-b93d422df097-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ssg7r\" (UID: \"f10cf2ea-d11c-422e-9f8e-b93d422df097\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ssg7r" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.139777 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0379bce9-e0c6-4283-8fb4-fcf300dc30bf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-plpn9\" (UID: \"0379bce9-e0c6-4283-8fb4-fcf300dc30bf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-plpn9" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.139839 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d68b032e-f86c-4928-a676-03c9e49c6722-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-grbn4\" (UID: \"d68b032e-f86c-4928-a676-03c9e49c6722\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-grbn4" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.139896 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b0330c1-19bb-492e-815a-2827e5749d68-config\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.139958 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6be95c99-c279-4066-a0c6-b1499d8f7e07-srv-cert\") pod \"catalog-operator-68c6474976-n44qs\" (UID: \"6be95c99-c279-4066-a0c6-b1499d8f7e07\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n44qs" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.140021 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e7cba3a-da69-495d-8f3c-286a75ca8e48-config-volume\") pod \"dns-default-m9f2c\" (UID: \"1e7cba3a-da69-495d-8f3c-286a75ca8e48\") " pod="openshift-dns/dns-default-m9f2c" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.140068 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f404652-4bd9-4720-b625-01ae3c2d29fa-config\") pod \"kube-controller-manager-operator-78b949d7b-jx6g5\" (UID: \"1f404652-4bd9-4720-b625-01ae3c2d29fa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jx6g5" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.140117 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/08b2186f-939e-4005-9fd9-1f1cc7b087d8-proxy-tls\") pod \"machine-config-controller-84d6567774-nkstc\" (UID: \"08b2186f-939e-4005-9fd9-1f1cc7b087d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nkstc" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.140162 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kqpw\" (UniqueName: \"kubernetes.io/projected/905f21b5-42ca-4558-b66c-b957fd41c9e8-kube-api-access-8kqpw\") pod \"packageserver-d55dfcdfc-wnhtd\" (UID: \"905f21b5-42ca-4558-b66c-b957fd41c9e8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnhtd" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.140193 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1e7cba3a-da69-495d-8f3c-286a75ca8e48-metrics-tls\") pod \"dns-default-m9f2c\" (UID: \"1e7cba3a-da69-495d-8f3c-286a75ca8e48\") " pod="openshift-dns/dns-default-m9f2c" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.140250 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.140298 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0b0330c1-19bb-492e-815a-2827e5749d68-trusted-ca-bundle\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.140335 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0b0330c1-19bb-492e-815a-2827e5749d68-encryption-config\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.140387 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7add9ebb-c4ec-4eed-affb-bdd76b207c29-profile-collector-cert\") pod \"olm-operator-6b444d44fb-dz9cf\" (UID: \"7add9ebb-c4ec-4eed-affb-bdd76b207c29\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dz9cf" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.140439 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8351b0cf-f243-4fe3-ba94-30f3ee17320e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2x58v\" (UID: \"8351b0cf-f243-4fe3-ba94-30f3ee17320e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2x58v" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.140486 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcb581d1-4a29-4bf3-9df9-1669cf88e9f3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-h7pw2\" (UID: \"bcb581d1-4a29-4bf3-9df9-1669cf88e9f3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h7pw2" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.140542 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf8gx\" (UniqueName: \"kubernetes.io/projected/08b2186f-939e-4005-9fd9-1f1cc7b087d8-kube-api-access-jf8gx\") pod \"machine-config-controller-84d6567774-nkstc\" (UID: \"08b2186f-939e-4005-9fd9-1f1cc7b087d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nkstc" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.140658 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8119cbd7-40a4-4875-b49c-1e982ec9acd8-config\") pod \"service-ca-operator-777779d784-rpcpn\" (UID: \"8119cbd7-40a4-4875-b49c-1e982ec9acd8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpcpn" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.140709 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5122101-998b-48d5-ae6e-c4746b2ba055-service-ca-bundle\") pod \"router-default-5444994796-h9rvk\" (UID: \"a5122101-998b-48d5-ae6e-c4746b2ba055\") " pod="openshift-ingress/router-default-5444994796-h9rvk" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.140744 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sczqk\" (UniqueName: \"kubernetes.io/projected/b10bc118-1493-4055-a8c2-1a1b9aca7c91-kube-api-access-sczqk\") pod \"control-plane-machine-set-operator-78cbb6b69f-qzc6g\" (UID: 
\"b10bc118-1493-4055-a8c2-1a1b9aca7c91\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qzc6g" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.140810 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8548b94-9099-42d5-914d-c2c10561bc5a-config-volume\") pod \"collect-profiles-29494980-9zrww\" (UID: \"c8548b94-9099-42d5-914d-c2c10561bc5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-9zrww" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.140843 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6be95c99-c279-4066-a0c6-b1499d8f7e07-profile-collector-cert\") pod \"catalog-operator-68c6474976-n44qs\" (UID: \"6be95c99-c279-4066-a0c6-b1499d8f7e07\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n44qs" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.140879 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/905f21b5-42ca-4558-b66c-b957fd41c9e8-webhook-cert\") pod \"packageserver-d55dfcdfc-wnhtd\" (UID: \"905f21b5-42ca-4558-b66c-b957fd41c9e8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnhtd" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.140914 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9d54611-82e4-4698-b654-62a1d7144225-audit-policies\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.140958 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/818a92e0-3e21-4f17-8950-a74066570368-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bkltp\" (UID: \"818a92e0-3e21-4f17-8950-a74066570368\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bkltp" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.140991 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb2nv\" (UniqueName: \"kubernetes.io/projected/6be95c99-c279-4066-a0c6-b1499d8f7e07-kube-api-access-kb2nv\") pod \"catalog-operator-68c6474976-n44qs\" (UID: \"6be95c99-c279-4066-a0c6-b1499d8f7e07\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n44qs" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141023 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf49j\" (UniqueName: \"kubernetes.io/projected/dd74c040-9e89-4c40-8e16-5ae6c0f6e65f-kube-api-access-rf49j\") pod \"ingress-operator-5b745b69d9-tvv95\" (UID: \"dd74c040-9e89-4c40-8e16-5ae6c0f6e65f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tvv95" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141059 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ca0b207e-f487-4256-b01b-47aecb6921b6-node-bootstrap-token\") pod \"machine-config-server-mfd9g\" (UID: \"ca0b207e-f487-4256-b01b-47aecb6921b6\") " 
pod="openshift-machine-config-operator/machine-config-server-mfd9g" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141091 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dd74c040-9e89-4c40-8e16-5ae6c0f6e65f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-tvv95\" (UID: \"dd74c040-9e89-4c40-8e16-5ae6c0f6e65f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tvv95" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141122 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141156 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krf9j\" (UniqueName: \"kubernetes.io/projected/f10cf2ea-d11c-422e-9f8e-b93d422df097-kube-api-access-krf9j\") pod \"package-server-manager-789f6589d5-ssg7r\" (UID: \"f10cf2ea-d11c-422e-9f8e-b93d422df097\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ssg7r" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141164 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b0330c1-19bb-492e-815a-2827e5749d68-config\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141192 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9d54611-82e4-4698-b654-62a1d7144225-audit-dir\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141226 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d68b032e-f86c-4928-a676-03c9e49c6722-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-grbn4\" (UID: \"d68b032e-f86c-4928-a676-03c9e49c6722\") " pod="openshift-marketplace/marketplace-operator-79b997595-grbn4" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141258 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8548b94-9099-42d5-914d-c2c10561bc5a-secret-volume\") pod \"collect-profiles-29494980-9zrww\" (UID: \"c8548b94-9099-42d5-914d-c2c10561bc5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-9zrww" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141333 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fa387e7d-5a82-4577-bbe3-ea5aeb17adc2-plugins-dir\") pod \"csi-hostpathplugin-wsz9t\" (UID: \"fa387e7d-5a82-4577-bbe3-ea5aeb17adc2\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz9t" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141378 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwbsq\" 
(UniqueName: \"kubernetes.io/projected/ace4ebfd-1a19-4556-a22e-d9cc9ce6d143-kube-api-access-hwbsq\") pod \"ingress-canary-xfk54\" (UID: \"ace4ebfd-1a19-4556-a22e-d9cc9ce6d143\") " pod="openshift-ingress-canary/ingress-canary-xfk54" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141407 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f404652-4bd9-4720-b625-01ae3c2d29fa-config\") pod \"kube-controller-manager-operator-78b949d7b-jx6g5\" (UID: \"1f404652-4bd9-4720-b625-01ae3c2d29fa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jx6g5" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141425 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0379bce9-e0c6-4283-8fb4-fcf300dc30bf-config\") pod \"kube-apiserver-operator-766d6c64bb-plpn9\" (UID: \"0379bce9-e0c6-4283-8fb4-fcf300dc30bf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-plpn9" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141476 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8-etcd-client\") pod \"etcd-operator-b45778765-k5nbp\" (UID: \"cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5nbp" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141523 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk4ds\" (UniqueName: \"kubernetes.io/projected/ca0b207e-f487-4256-b01b-47aecb6921b6-kube-api-access-vk4ds\") pod \"machine-config-server-mfd9g\" (UID: \"ca0b207e-f487-4256-b01b-47aecb6921b6\") " pod="openshift-machine-config-operator/machine-config-server-mfd9g" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141544 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnbdj\" (UniqueName: \"kubernetes.io/projected/fa387e7d-5a82-4577-bbe3-ea5aeb17adc2-kube-api-access-vnbdj\") pod \"csi-hostpathplugin-wsz9t\" (UID: \"fa387e7d-5a82-4577-bbe3-ea5aeb17adc2\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz9t" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141564 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141583 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0b0330c1-19bb-492e-815a-2827e5749d68-image-import-ca\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: E0129 15:12:57.141610 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:12:57.641586679 +0000 UTC m=+140.930837026 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141664 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcb581d1-4a29-4bf3-9df9-1669cf88e9f3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-h7pw2\" (UID: \"bcb581d1-4a29-4bf3-9df9-1669cf88e9f3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h7pw2" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141701 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f404652-4bd9-4720-b625-01ae3c2d29fa-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-jx6g5\" (UID: \"1f404652-4bd9-4720-b625-01ae3c2d29fa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jx6g5" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141745 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fa387e7d-5a82-4577-bbe3-ea5aeb17adc2-mountpoint-dir\") pod \"csi-hostpathplugin-wsz9t\" (UID: \"fa387e7d-5a82-4577-bbe3-ea5aeb17adc2\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz9t" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141774 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfckq\" (UniqueName: \"kubernetes.io/projected/8119cbd7-40a4-4875-b49c-1e982ec9acd8-kube-api-access-sfckq\") pod \"service-ca-operator-777779d784-rpcpn\" (UID: \"8119cbd7-40a4-4875-b49c-1e982ec9acd8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpcpn" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141810 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntmc8\" (UniqueName: \"kubernetes.io/projected/a5122101-998b-48d5-ae6e-c4746b2ba055-kube-api-access-ntmc8\") pod \"router-default-5444994796-h9rvk\" (UID: \"a5122101-998b-48d5-ae6e-c4746b2ba055\") " pod="openshift-ingress/router-default-5444994796-h9rvk" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141834 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmsm6\" (UniqueName: \"kubernetes.io/projected/bcb581d1-4a29-4bf3-9df9-1669cf88e9f3-kube-api-access-nmsm6\") pod \"openshift-apiserver-operator-796bbdcf4f-h7pw2\" (UID: \"bcb581d1-4a29-4bf3-9df9-1669cf88e9f3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h7pw2" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141859 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a5122101-998b-48d5-ae6e-c4746b2ba055-default-certificate\") pod \"router-default-5444994796-h9rvk\" (UID: \"a5122101-998b-48d5-ae6e-c4746b2ba055\") " pod="openshift-ingress/router-default-5444994796-h9rvk" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141882 4757 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141908 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fa387e7d-5a82-4577-bbe3-ea5aeb17adc2-registration-dir\") pod \"csi-hostpathplugin-wsz9t\" (UID: \"fa387e7d-5a82-4577-bbe3-ea5aeb17adc2\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz9t" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141932 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9dd1f071-c13e-42a1-80bd-81d4121b0cdc-proxy-tls\") pod \"machine-config-operator-74547568cd-lhw6r\" (UID: \"9dd1f071-c13e-42a1-80bd-81d4121b0cdc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lhw6r" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141972 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.141993 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dngr\" (UniqueName: \"kubernetes.io/projected/7add9ebb-c4ec-4eed-affb-bdd76b207c29-kube-api-access-6dngr\") pod \"olm-operator-6b444d44fb-dz9cf\" (UID: \"7add9ebb-c4ec-4eed-affb-bdd76b207c29\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dz9cf" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142013 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/da973e95-27c1-4f17-87e4-79bf0bc0e0fe-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-trnpt\" (UID: \"da973e95-27c1-4f17-87e4-79bf0bc0e0fe\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-trnpt" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142057 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr5xt\" (UniqueName: \"kubernetes.io/projected/589345a6-68e3-4e06-bf66-b30c3457f59c-kube-api-access-pr5xt\") pod \"cluster-image-registry-operator-dc59b4c8b-x9p7l\" (UID: \"589345a6-68e3-4e06-bf66-b30c3457f59c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9p7l" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142082 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0379bce9-e0c6-4283-8fb4-fcf300dc30bf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-plpn9\" (UID: \"0379bce9-e0c6-4283-8fb4-fcf300dc30bf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-plpn9" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142106 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" 
(UniqueName: \"kubernetes.io/secret/a0f71154-b1ff-4e61-9c93-8bcb95678bce-console-oauth-config\") pod \"console-f9d7485db-skxmw\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142133 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a0f71154-b1ff-4e61-9c93-8bcb95678bce-oauth-serving-cert\") pod \"console-f9d7485db-skxmw\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142155 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcp8m\" (UniqueName: \"kubernetes.io/projected/a0f71154-b1ff-4e61-9c93-8bcb95678bce-kube-api-access-kcp8m\") pod \"console-f9d7485db-skxmw\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142175 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0b0330c1-19bb-492e-815a-2827e5749d68-node-pullsecrets\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142197 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b0330c1-19bb-492e-815a-2827e5749d68-audit-dir\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142219 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5122101-998b-48d5-ae6e-c4746b2ba055-metrics-certs\") pod \"router-default-5444994796-h9rvk\" (UID: \"a5122101-998b-48d5-ae6e-c4746b2ba055\") " pod="openshift-ingress/router-default-5444994796-h9rvk" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142239 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7add9ebb-c4ec-4eed-affb-bdd76b207c29-srv-cert\") pod \"olm-operator-6b444d44fb-dz9cf\" (UID: \"7add9ebb-c4ec-4eed-affb-bdd76b207c29\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dz9cf" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142291 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8-etcd-service-ca\") pod \"etcd-operator-b45778765-k5nbp\" (UID: \"cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5nbp" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142319 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8351b0cf-f243-4fe3-ba94-30f3ee17320e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2x58v\" (UID: \"8351b0cf-f243-4fe3-ba94-30f3ee17320e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2x58v" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142357 4757 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/69d5d601-aa72-4044-ad14-81c12a34c8f0-signing-cabundle\") pod \"service-ca-9c57cc56f-m7q76\" (UID: \"69d5d601-aa72-4044-ad14-81c12a34c8f0\") " pod="openshift-service-ca/service-ca-9c57cc56f-m7q76" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142379 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fa387e7d-5a82-4577-bbe3-ea5aeb17adc2-csi-data-dir\") pod \"csi-hostpathplugin-wsz9t\" (UID: \"fa387e7d-5a82-4577-bbe3-ea5aeb17adc2\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz9t" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142400 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brpr8\" (UniqueName: \"kubernetes.io/projected/1e7cba3a-da69-495d-8f3c-286a75ca8e48-kube-api-access-brpr8\") pod \"dns-default-m9f2c\" (UID: \"1e7cba3a-da69-495d-8f3c-286a75ca8e48\") " pod="openshift-dns/dns-default-m9f2c" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142647 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b0330c1-19bb-492e-815a-2827e5749d68-etcd-client\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142672 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlr7r\" (UniqueName: \"kubernetes.io/projected/d68b032e-f86c-4928-a676-03c9e49c6722-kube-api-access-nlr7r\") pod \"marketplace-operator-79b997595-grbn4\" (UID: \"d68b032e-f86c-4928-a676-03c9e49c6722\") " pod="openshift-marketplace/marketplace-operator-79b997595-grbn4" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142694 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8-etcd-ca\") pod \"etcd-operator-b45778765-k5nbp\" (UID: \"cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5nbp" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142719 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfgxh\" (UniqueName: \"kubernetes.io/projected/cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8-kube-api-access-bfgxh\") pod \"etcd-operator-b45778765-k5nbp\" (UID: \"cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5nbp" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142741 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/589345a6-68e3-4e06-bf66-b30c3457f59c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-x9p7l\" (UID: \"589345a6-68e3-4e06-bf66-b30c3457f59c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9p7l" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142766 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/905f21b5-42ca-4558-b66c-b957fd41c9e8-tmpfs\") pod \"packageserver-d55dfcdfc-wnhtd\" (UID: \"905f21b5-42ca-4558-b66c-b957fd41c9e8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnhtd" Jan 29 
15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142789 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e93cae3-c9b6-493c-a8cc-c09cc83b0dca-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-pw8nl\" (UID: \"1e93cae3-c9b6-493c-a8cc-c09cc83b0dca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pw8nl" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142814 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142842 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6vlw\" (UniqueName: \"kubernetes.io/projected/3e6ceaed-34b1-4c4f-abe3-96756d34e30f-kube-api-access-f6vlw\") pod \"downloads-7954f5f757-gs77j\" (UID: \"3e6ceaed-34b1-4c4f-abe3-96756d34e30f\") " pod="openshift-console/downloads-7954f5f757-gs77j" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142864 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dd74c040-9e89-4c40-8e16-5ae6c0f6e65f-metrics-tls\") pod \"ingress-operator-5b745b69d9-tvv95\" (UID: \"dd74c040-9e89-4c40-8e16-5ae6c0f6e65f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tvv95" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142888 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a0f71154-b1ff-4e61-9c93-8bcb95678bce-service-ca\") pod \"console-f9d7485db-skxmw\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142914 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e93cae3-c9b6-493c-a8cc-c09cc83b0dca-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-pw8nl\" (UID: \"1e93cae3-c9b6-493c-a8cc-c09cc83b0dca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pw8nl" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142939 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24h4l\" (UniqueName: \"kubernetes.io/projected/88566fd4-0a9f-42dd-a6d5-989dc7176aea-kube-api-access-24h4l\") pod \"migrator-59844c95c7-dkjl5\" (UID: \"88566fd4-0a9f-42dd-a6d5-989dc7176aea\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dkjl5" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142963 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5ljf\" (UniqueName: \"kubernetes.io/projected/818a92e0-3e21-4f17-8950-a74066570368-kube-api-access-f5ljf\") pod \"openshift-controller-manager-operator-756b6f6bc6-bkltp\" (UID: \"818a92e0-3e21-4f17-8950-a74066570368\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bkltp" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142992 4757 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.143021 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0f71154-b1ff-4e61-9c93-8bcb95678bce-console-serving-cert\") pod \"console-f9d7485db-skxmw\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.143048 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/905f21b5-42ca-4558-b66c-b957fd41c9e8-apiservice-cert\") pod \"packageserver-d55dfcdfc-wnhtd\" (UID: \"905f21b5-42ca-4558-b66c-b957fd41c9e8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnhtd" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.143071 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9dd1f071-c13e-42a1-80bd-81d4121b0cdc-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lhw6r\" (UID: \"9dd1f071-c13e-42a1-80bd-81d4121b0cdc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lhw6r" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.143095 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/589345a6-68e3-4e06-bf66-b30c3457f59c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-x9p7l\" (UID: \"589345a6-68e3-4e06-bf66-b30c3457f59c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9p7l" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.143116 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd74c040-9e89-4c40-8e16-5ae6c0f6e65f-trusted-ca\") pod \"ingress-operator-5b745b69d9-tvv95\" (UID: \"dd74c040-9e89-4c40-8e16-5ae6c0f6e65f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tvv95" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.143137 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmsld\" (UniqueName: \"kubernetes.io/projected/e9d54611-82e4-4698-b654-62a1d7144225-kube-api-access-zmsld\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.143158 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrcsn\" (UniqueName: \"kubernetes.io/projected/0b0330c1-19bb-492e-815a-2827e5749d68-kube-api-access-lrcsn\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.143178 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ace4ebfd-1a19-4556-a22e-d9cc9ce6d143-cert\") pod 
\"ingress-canary-xfk54\" (UID: \"ace4ebfd-1a19-4556-a22e-d9cc9ce6d143\") " pod="openshift-ingress-canary/ingress-canary-xfk54" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.143200 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8119cbd7-40a4-4875-b49c-1e982ec9acd8-serving-cert\") pod \"service-ca-operator-777779d784-rpcpn\" (UID: \"8119cbd7-40a4-4875-b49c-1e982ec9acd8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpcpn" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.143222 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ca0b207e-f487-4256-b01b-47aecb6921b6-certs\") pod \"machine-config-server-mfd9g\" (UID: \"ca0b207e-f487-4256-b01b-47aecb6921b6\") " pod="openshift-machine-config-operator/machine-config-server-mfd9g" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.143245 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b10bc118-1493-4055-a8c2-1a1b9aca7c91-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qzc6g\" (UID: \"b10bc118-1493-4055-a8c2-1a1b9aca7c91\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qzc6g" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.143283 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/818a92e0-3e21-4f17-8950-a74066570368-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bkltp\" (UID: \"818a92e0-3e21-4f17-8950-a74066570368\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bkltp" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.143312 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2f42\" (UniqueName: \"kubernetes.io/projected/c8548b94-9099-42d5-914d-c2c10561bc5a-kube-api-access-j2f42\") pod \"collect-profiles-29494980-9zrww\" (UID: \"c8548b94-9099-42d5-914d-c2c10561bc5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-9zrww" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.143334 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0b0330c1-19bb-492e-815a-2827e5749d68-audit\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.143357 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0f71154-b1ff-4e61-9c93-8bcb95678bce-trusted-ca-bundle\") pod \"console-f9d7485db-skxmw\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.143378 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a0f71154-b1ff-4e61-9c93-8bcb95678bce-console-config\") pod \"console-f9d7485db-skxmw\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.143399 4757 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8-config\") pod \"etcd-operator-b45778765-k5nbp\" (UID: \"cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5nbp" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.143500 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f10cf2ea-d11c-422e-9f8e-b93d422df097-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ssg7r\" (UID: \"f10cf2ea-d11c-422e-9f8e-b93d422df097\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ssg7r" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.143926 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9d54611-82e4-4698-b654-62a1d7144225-audit-dir\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.144642 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fa387e7d-5a82-4577-bbe3-ea5aeb17adc2-registration-dir\") pod \"csi-hostpathplugin-wsz9t\" (UID: \"fa387e7d-5a82-4577-bbe3-ea5aeb17adc2\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz9t" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.145087 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0b0330c1-19bb-492e-815a-2827e5749d68-encryption-config\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.145217 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/fa387e7d-5a82-4577-bbe3-ea5aeb17adc2-mountpoint-dir\") pod \"csi-hostpathplugin-wsz9t\" (UID: \"fa387e7d-5a82-4577-bbe3-ea5aeb17adc2\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz9t" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.145243 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/fa387e7d-5a82-4577-bbe3-ea5aeb17adc2-plugins-dir\") pod \"csi-hostpathplugin-wsz9t\" (UID: \"fa387e7d-5a82-4577-bbe3-ea5aeb17adc2\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz9t" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.146000 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/616e840f-aaeb-48cc-b979-f690d54a8c95-metrics-tls\") pod \"dns-operator-744455d44c-pvt9r\" (UID: \"616e840f-aaeb-48cc-b979-f690d54a8c95\") " pod="openshift-dns-operator/dns-operator-744455d44c-pvt9r" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.146082 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/589345a6-68e3-4e06-bf66-b30c3457f59c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-x9p7l\" (UID: \"589345a6-68e3-4e06-bf66-b30c3457f59c\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9p7l" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.146301 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b0330c1-19bb-492e-815a-2827e5749d68-serving-cert\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.146395 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bcb581d1-4a29-4bf3-9df9-1669cf88e9f3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-h7pw2\" (UID: \"bcb581d1-4a29-4bf3-9df9-1669cf88e9f3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h7pw2" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.146816 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d68b032e-f86c-4928-a676-03c9e49c6722-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-grbn4\" (UID: \"d68b032e-f86c-4928-a676-03c9e49c6722\") " pod="openshift-marketplace/marketplace-operator-79b997595-grbn4" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.146857 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8-serving-cert\") pod \"etcd-operator-b45778765-k5nbp\" (UID: \"cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5nbp" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.147552 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a0f71154-b1ff-4e61-9c93-8bcb95678bce-oauth-serving-cert\") pod \"console-f9d7485db-skxmw\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.147658 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0b0330c1-19bb-492e-815a-2827e5749d68-node-pullsecrets\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.147689 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b0330c1-19bb-492e-815a-2827e5749d68-audit-dir\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.147770 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a0f71154-b1ff-4e61-9c93-8bcb95678bce-service-ca\") pod \"console-f9d7485db-skxmw\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.148316 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/818a92e0-3e21-4f17-8950-a74066570368-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bkltp\" (UID: 
\"818a92e0-3e21-4f17-8950-a74066570368\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bkltp" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.148885 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8351b0cf-f243-4fe3-ba94-30f3ee17320e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2x58v\" (UID: \"8351b0cf-f243-4fe3-ba94-30f3ee17320e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2x58v" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.142359 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0b0330c1-19bb-492e-815a-2827e5749d68-image-import-ca\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.149044 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/fa387e7d-5a82-4577-bbe3-ea5aeb17adc2-csi-data-dir\") pod \"csi-hostpathplugin-wsz9t\" (UID: \"fa387e7d-5a82-4577-bbe3-ea5aeb17adc2\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz9t" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.149319 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/589345a6-68e3-4e06-bf66-b30c3457f59c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-x9p7l\" (UID: \"589345a6-68e3-4e06-bf66-b30c3457f59c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9p7l" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.149572 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8-config\") pod \"etcd-operator-b45778765-k5nbp\" (UID: \"cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5nbp" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.149602 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcb581d1-4a29-4bf3-9df9-1669cf88e9f3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-h7pw2\" (UID: \"bcb581d1-4a29-4bf3-9df9-1669cf88e9f3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h7pw2" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.150169 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8-etcd-service-ca\") pod \"etcd-operator-b45778765-k5nbp\" (UID: \"cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5nbp" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.150248 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f404652-4bd9-4720-b625-01ae3c2d29fa-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-jx6g5\" (UID: \"1f404652-4bd9-4720-b625-01ae3c2d29fa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jx6g5" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.150579 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/818a92e0-3e21-4f17-8950-a74066570368-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bkltp\" (UID: \"818a92e0-3e21-4f17-8950-a74066570368\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bkltp" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.150841 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/905f21b5-42ca-4558-b66c-b957fd41c9e8-tmpfs\") pod \"packageserver-d55dfcdfc-wnhtd\" (UID: \"905f21b5-42ca-4558-b66c-b957fd41c9e8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnhtd" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.150915 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0b0330c1-19bb-492e-815a-2827e5749d68-audit\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.151065 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0f71154-b1ff-4e61-9c93-8bcb95678bce-console-serving-cert\") pod \"console-f9d7485db-skxmw\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.151148 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8-etcd-ca\") pod \"etcd-operator-b45778765-k5nbp\" (UID: \"cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5nbp" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.151395 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8-etcd-client\") pod \"etcd-operator-b45778765-k5nbp\" (UID: \"cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5nbp" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.151980 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9dd1f071-c13e-42a1-80bd-81d4121b0cdc-auth-proxy-config\") pod \"machine-config-operator-74547568cd-lhw6r\" (UID: \"9dd1f071-c13e-42a1-80bd-81d4121b0cdc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lhw6r" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.152018 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a0f71154-b1ff-4e61-9c93-8bcb95678bce-console-config\") pod \"console-f9d7485db-skxmw\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.152119 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0f71154-b1ff-4e61-9c93-8bcb95678bce-trusted-ca-bundle\") pod \"console-f9d7485db-skxmw\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.153516 4757 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/905f21b5-42ca-4558-b66c-b957fd41c9e8-apiservice-cert\") pod \"packageserver-d55dfcdfc-wnhtd\" (UID: \"905f21b5-42ca-4558-b66c-b957fd41c9e8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnhtd" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.153626 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/da973e95-27c1-4f17-87e4-79bf0bc0e0fe-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-trnpt\" (UID: \"da973e95-27c1-4f17-87e4-79bf0bc0e0fe\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-trnpt" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.153921 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8351b0cf-f243-4fe3-ba94-30f3ee17320e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2x58v\" (UID: \"8351b0cf-f243-4fe3-ba94-30f3ee17320e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2x58v" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.156552 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0b0330c1-19bb-492e-815a-2827e5749d68-etcd-client\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.157240 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a0f71154-b1ff-4e61-9c93-8bcb95678bce-console-oauth-config\") pod \"console-f9d7485db-skxmw\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.157534 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/905f21b5-42ca-4558-b66c-b957fd41c9e8-webhook-cert\") pod \"packageserver-d55dfcdfc-wnhtd\" (UID: \"905f21b5-42ca-4558-b66c-b957fd41c9e8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnhtd" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.164095 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.165869 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d68b032e-f86c-4928-a676-03c9e49c6722-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-grbn4\" (UID: \"d68b032e-f86c-4928-a676-03c9e49c6722\") " pod="openshift-marketplace/marketplace-operator-79b997595-grbn4" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.175257 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.183332 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e93cae3-c9b6-493c-a8cc-c09cc83b0dca-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-pw8nl\" (UID: \"1e93cae3-c9b6-493c-a8cc-c09cc83b0dca\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pw8nl" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.195466 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.215623 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.235216 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.244242 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:12:57 crc kubenswrapper[4757]: E0129 15:12:57.244396 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:12:57.744375694 +0000 UTC m=+141.033625931 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.244612 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:57 crc kubenswrapper[4757]: E0129 15:12:57.244982 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:12:57.744974522 +0000 UTC m=+141.034224759 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.256224 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.264026 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0379bce9-e0c6-4283-8fb4-fcf300dc30bf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-plpn9\" (UID: \"0379bce9-e0c6-4283-8fb4-fcf300dc30bf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-plpn9" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.276019 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.282797 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0379bce9-e0c6-4283-8fb4-fcf300dc30bf-config\") pod \"kube-apiserver-operator-766d6c64bb-plpn9\" (UID: \"0379bce9-e0c6-4283-8fb4-fcf300dc30bf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-plpn9" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.295722 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.335335 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.346017 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:12:57 crc kubenswrapper[4757]: E0129 15:12:57.346226 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:12:57.846200029 +0000 UTC m=+141.135450266 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.346578 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:57 crc kubenswrapper[4757]: E0129 15:12:57.346892 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:12:57.84687571 +0000 UTC m=+141.136125967 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.356009 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.366859 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.375440 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.380667 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.404298 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.408744 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.422095 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.435576 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.437915 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.440096 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.448113 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:12:57 crc kubenswrapper[4757]: E0129 15:12:57.448948 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:12:57.948933542 +0000 UTC m=+141.238183779 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.455309 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.467315 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.475584 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.494992 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.506426 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.515919 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.523813 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.535399 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.544078 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9d54611-82e4-4698-b654-62a1d7144225-audit-policies\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.550035 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:57 
crc kubenswrapper[4757]: E0129 15:12:57.550457 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.050444428 +0000 UTC m=+141.339694665 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.555202 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.561012 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.580084 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.589011 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.595533 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.615739 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.619609 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.635061 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.641434 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e93cae3-c9b6-493c-a8cc-c09cc83b0dca-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-pw8nl\" (UID: \"1e93cae3-c9b6-493c-a8cc-c09cc83b0dca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pw8nl" Jan 29 15:12:57 crc 
kubenswrapper[4757]: I0129 15:12:57.651503 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:12:57 crc kubenswrapper[4757]: E0129 15:12:57.651672 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.151650545 +0000 UTC m=+141.440900782 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.652051 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:57 crc kubenswrapper[4757]: E0129 15:12:57.652414 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.152403137 +0000 UTC m=+141.441653374 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.656977 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.664170 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ca0b207e-f487-4256-b01b-47aecb6921b6-certs\") pod \"machine-config-server-mfd9g\" (UID: \"ca0b207e-f487-4256-b01b-47aecb6921b6\") " pod="openshift-machine-config-operator/machine-config-server-mfd9g" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.675542 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.695224 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.715060 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.724230 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8119cbd7-40a4-4875-b49c-1e982ec9acd8-serving-cert\") pod \"service-ca-operator-777779d784-rpcpn\" (UID: \"8119cbd7-40a4-4875-b49c-1e982ec9acd8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpcpn" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.734944 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.744877 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8119cbd7-40a4-4875-b49c-1e982ec9acd8-config\") pod \"service-ca-operator-777779d784-rpcpn\" (UID: \"8119cbd7-40a4-4875-b49c-1e982ec9acd8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpcpn" Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.752712 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:12:57 crc kubenswrapper[4757]: E0129 15:12:57.752944 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.252912613 +0000 UTC m=+141.542162900 (durationBeforeRetry 500ms). 
Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.753911 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:12:57 crc kubenswrapper[4757]: E0129 15:12:57.754303 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.254295095 +0000 UTC m=+141.543545332 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.755012 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.776404 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.794067 4757 request.go:700] Waited for 1.006934152s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&limit=500&resourceVersion=0
Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.795860 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.815692 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.820779 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/69d5d601-aa72-4044-ad14-81c12a34c8f0-signing-key\") pod \"service-ca-9c57cc56f-m7q76\" (UID: \"69d5d601-aa72-4044-ad14-81c12a34c8f0\") " pod="openshift-service-ca/service-ca-9c57cc56f-m7q76"
Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.835325 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.841006 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/69d5d601-aa72-4044-ad14-81c12a34c8f0-signing-cabundle\") pod \"service-ca-9c57cc56f-m7q76\" (UID: \"69d5d601-aa72-4044-ad14-81c12a34c8f0\") " pod="openshift-service-ca/service-ca-9c57cc56f-m7q76"
Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.862962 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:12:57 crc kubenswrapper[4757]: E0129 15:12:57.866363 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.363045779 +0000 UTC m=+141.652296016 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.866926 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:12:57 crc kubenswrapper[4757]: E0129 15:12:57.867361 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.367301958 +0000 UTC m=+141.656552235 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.867670 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.876369 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.884036 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ca0b207e-f487-4256-b01b-47aecb6921b6-node-bootstrap-token\") pod \"machine-config-server-mfd9g\" (UID: \"ca0b207e-f487-4256-b01b-47aecb6921b6\") " pod="openshift-machine-config-operator/machine-config-server-mfd9g"
Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.896071 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.916016 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.936102 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.955450 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.960452 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dd74c040-9e89-4c40-8e16-5ae6c0f6e65f-metrics-tls\") pod \"ingress-operator-5b745b69d9-tvv95\" (UID: \"dd74c040-9e89-4c40-8e16-5ae6c0f6e65f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tvv95"
Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.967997 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:12:57 crc kubenswrapper[4757]: E0129 15:12:57.968257 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.468229026 +0000 UTC m=+141.757479293 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.968905 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:12:57 crc kubenswrapper[4757]: E0129 15:12:57.969345 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.469318059 +0000 UTC m=+141.758568376 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.982730 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.990860 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd74c040-9e89-4c40-8e16-5ae6c0f6e65f-trusted-ca\") pod \"ingress-operator-5b745b69d9-tvv95\" (UID: \"dd74c040-9e89-4c40-8e16-5ae6c0f6e65f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tvv95"
Jan 29 15:12:57 crc kubenswrapper[4757]: I0129 15:12:57.995338 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.015237 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.023179 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b10bc118-1493-4055-a8c2-1a1b9aca7c91-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qzc6g\" (UID: \"b10bc118-1493-4055-a8c2-1a1b9aca7c91\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qzc6g"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.036953 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.055339 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
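The many "Caches populated for *v1.Secret/*v1.ConfigMap from object-..." reflector lines show the kubelet running one watch per referenced object, not per namespace: the throttled GET earlier carries fieldSelector=metadata.name%3D<one secret>. A hedged client-go sketch of that pattern (kubeconfig path, namespace, and secret name are just taken from the log lines above for illustration; this is the shape of the mechanism, not kubelet source):

    package main

    import (
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        factory := informers.NewSharedInformerFactoryWithOptions(
            client, 10*time.Minute,
            informers.WithNamespace("openshift-machine-config-operator"),
            informers.WithTweakListOptions(func(o *metav1.ListOptions) {
                // One list/watch per referenced object, as the kubelet does.
                o.FieldSelector = "metadata.name=machine-config-server-tls"
            }),
        )
        inf := factory.Core().V1().Secrets().Informer()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)
        if !cache.WaitForCacheSync(stop, inf.HasSynced) {
            panic("timed out waiting for the condition") // the error seen below
        }
        fmt.Println("Caches populated for *v1.Secret")
    }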
object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.058062 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9dd1f071-c13e-42a1-80bd-81d4121b0cdc-images\") pod \"machine-config-operator-74547568cd-lhw6r\" (UID: \"9dd1f071-c13e-42a1-80bd-81d4121b0cdc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lhw6r" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.070735 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.070943 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.570911258 +0000 UTC m=+141.860161495 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.071116 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.071496 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.571483175 +0000 UTC m=+141.860733412 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.075226 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.095926 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.115160 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.128361 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9dd1f071-c13e-42a1-80bd-81d4121b0cdc-proxy-tls\") pod \"machine-config-operator-74547568cd-lhw6r\" (UID: \"9dd1f071-c13e-42a1-80bd-81d4121b0cdc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lhw6r" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.134933 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.138067 4757 secret.go:188] Couldn't get secret openshift-ingress/router-stats-default: failed to sync secret cache: timed out waiting for the condition Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.138204 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5122101-998b-48d5-ae6e-c4746b2ba055-stats-auth podName:a5122101-998b-48d5-ae6e-c4746b2ba055 nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.638184299 +0000 UTC m=+141.927434536 (durationBeforeRetry 500ms). 
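The secret.go:188 and configmap.go:193 errors that begin here ("failed to sync secret cache: timed out waiting for the condition") are not API failures: secret and configmap volume contents are served from the kubelet's local watch cache, and SetUp fails fast when that cache has not caught up yet, to be retried after the 500ms backoff. A stdlib-only sketch of the gate, with names and timeouts assumed for illustration:

    package main

    import (
        "errors"
        "fmt"
        "sync/atomic"
        "time"
    )

    // waitForCacheSync polls a sync flag until it is set or the deadline
    // passes, mirroring the wait-poll loop whose failure surfaces as
    // "timed out waiting for the condition".
    func waitForCacheSync(synced *atomic.Bool, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if synced.Load() {
                return nil
            }
            time.Sleep(10 * time.Millisecond)
        }
        return errors.New("timed out waiting for the condition")
    }

    func main() {
        var cacheSynced atomic.Bool
        go func() { // the watch catches up a little later
            time.Sleep(150 * time.Millisecond)
            cacheSynced.Store(true)
        }()

        // First attempt: cache not yet populated, so SetUp fails and is
        // queued for retry, exactly like the stats-auth mount above.
        if err := waitForCacheSync(&cacheSynced, 50*time.Millisecond); err != nil {
            fmt.Println("Couldn't get secret openshift-ingress/router-stats-default: failed to sync secret cache:", err)
        }
        // Retry after the backoff: the cache has synced and the mount proceeds.
        if err := waitForCacheSync(&cacheSynced, 500*time.Millisecond); err == nil {
            fmt.Println("MountVolume.SetUp succeeded for volume \"stats-auth\"")
        }
    }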
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.141333 4757 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.141383 4757 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.141333 4757 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.141359 4757 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.141473 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c8548b94-9099-42d5-914d-c2c10561bc5a-config-volume podName:c8548b94-9099-42d5-914d-c2c10561bc5a nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.641409257 +0000 UTC m=+141.930659494 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c8548b94-9099-42d5-914d-c2c10561bc5a-config-volume") pod "collect-profiles-29494980-9zrww" (UID: "c8548b94-9099-42d5-914d-c2c10561bc5a") : failed to sync configmap cache: timed out waiting for the condition
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.141493 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1e7cba3a-da69-495d-8f3c-286a75ca8e48-config-volume podName:1e7cba3a-da69-495d-8f3c-286a75ca8e48 nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.641485149 +0000 UTC m=+141.930735376 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1e7cba3a-da69-495d-8f3c-286a75ca8e48-config-volume") pod "dns-default-m9f2c" (UID: "1e7cba3a-da69-495d-8f3c-286a75ca8e48") : failed to sync configmap cache: timed out waiting for the condition
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.141524 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e7cba3a-da69-495d-8f3c-286a75ca8e48-metrics-tls podName:1e7cba3a-da69-495d-8f3c-286a75ca8e48 nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.6414994 +0000 UTC m=+141.930749637 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/1e7cba3a-da69-495d-8f3c-286a75ca8e48-metrics-tls") pod "dns-default-m9f2c" (UID: "1e7cba3a-da69-495d-8f3c-286a75ca8e48") : failed to sync secret cache: timed out waiting for the condition
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.141537 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6be95c99-c279-4066-a0c6-b1499d8f7e07-srv-cert podName:6be95c99-c279-4066-a0c6-b1499d8f7e07 nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.641532851 +0000 UTC m=+141.930783088 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/6be95c99-c279-4066-a0c6-b1499d8f7e07-srv-cert") pod "catalog-operator-68c6474976-n44qs" (UID: "6be95c99-c279-4066-a0c6-b1499d8f7e07") : failed to sync secret cache: timed out waiting for the condition
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.142016 4757 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.142059 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6be95c99-c279-4066-a0c6-b1499d8f7e07-profile-collector-cert podName:6be95c99-c279-4066-a0c6-b1499d8f7e07 nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.642050916 +0000 UTC m=+141.931301143 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/6be95c99-c279-4066-a0c6-b1499d8f7e07-profile-collector-cert") pod "catalog-operator-68c6474976-n44qs" (UID: "6be95c99-c279-4066-a0c6-b1499d8f7e07") : failed to sync secret cache: timed out waiting for the condition
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.142882 4757 secret.go:188] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.143018 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08b2186f-939e-4005-9fd9-1f1cc7b087d8-proxy-tls podName:08b2186f-939e-4005-9fd9-1f1cc7b087d8 nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.642962114 +0000 UTC m=+141.932212431 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/08b2186f-939e-4005-9fd9-1f1cc7b087d8-proxy-tls") pod "machine-config-controller-84d6567774-nkstc" (UID: "08b2186f-939e-4005-9fd9-1f1cc7b087d8") : failed to sync secret cache: timed out waiting for the condition
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.144165 4757 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.144225 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a5122101-998b-48d5-ae6e-c4746b2ba055-service-ca-bundle podName:a5122101-998b-48d5-ae6e-c4746b2ba055 nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.644212582 +0000 UTC m=+141.933462899 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/a5122101-998b-48d5-ae6e-c4746b2ba055-service-ca-bundle") pod "router-default-5444994796-h9rvk" (UID: "a5122101-998b-48d5-ae6e-c4746b2ba055") : failed to sync configmap cache: timed out waiting for the condition
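Every "No retries permitted until ... (durationBeforeRetry 500ms)" line from nestedpendingoperations.go:348 is the kubelet's pending-operations table refusing to restart a failed volume operation until its backoff window expires; the window doubles on repeated failures. A minimal stdlib sketch of that gate (initial and maximum durations are assumed constants, not read from kubelet source):

    package main

    import (
        "fmt"
        "time"
    )

    type backoff struct {
        lastErrorTime time.Time
        duration      time.Duration // the durationBeforeRetry in the log
    }

    const (
        initialBackoff = 500 * time.Millisecond // matches "(durationBeforeRetry 500ms)"
        maxBackoff     = 2 * time.Minute        // hypothetical cap
    )

    // recordError starts or doubles the backoff window after a failure.
    func (b *backoff) recordError(now time.Time) {
        if b.duration == 0 {
            b.duration = initialBackoff
        } else if b.duration < maxBackoff {
            b.duration *= 2
        }
        b.lastErrorTime = now
    }

    // mayRetry reports whether the reconciler is allowed to re-run the
    // operation on this pass.
    func (b *backoff) mayRetry(now time.Time) bool {
        return now.After(b.lastErrorTime.Add(b.duration))
    }

    func main() {
        var b backoff
        for attempt := 1; attempt <= 3; attempt++ {
            now := time.Now()
            b.recordError(now)
            fmt.Printf("attempt %d failed. No retries permitted until %s (durationBeforeRetry %s)\n",
                attempt, now.Add(b.duration).Format("15:04:05.000"), b.duration)
        }
    }

In this capture the retries stay at 500ms because each attempt lands only one reconcile pass after the previous failure.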
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.144175 4757 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.144327 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7add9ebb-c4ec-4eed-affb-bdd76b207c29-profile-collector-cert podName:7add9ebb-c4ec-4eed-affb-bdd76b207c29 nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.644316785 +0000 UTC m=+141.933567102 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/7add9ebb-c4ec-4eed-affb-bdd76b207c29-profile-collector-cert") pod "olm-operator-6b444d44fb-dz9cf" (UID: "7add9ebb-c4ec-4eed-affb-bdd76b207c29") : failed to sync secret cache: timed out waiting for the condition
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.145605 4757 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.145650 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8548b94-9099-42d5-914d-c2c10561bc5a-secret-volume podName:c8548b94-9099-42d5-914d-c2c10561bc5a nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.645641195 +0000 UTC m=+141.934891432 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-volume" (UniqueName: "kubernetes.io/secret/c8548b94-9099-42d5-914d-c2c10561bc5a-secret-volume") pod "collect-profiles-29494980-9zrww" (UID: "c8548b94-9099-42d5-914d-c2c10561bc5a") : failed to sync secret cache: timed out waiting for the condition
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.145676 4757 secret.go:188] Couldn't get secret openshift-ingress/router-certs-default: failed to sync secret cache: timed out waiting for the condition
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.145704 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5122101-998b-48d5-ae6e-c4746b2ba055-default-certificate podName:a5122101-998b-48d5-ae6e-c4746b2ba055 nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.645695426 +0000 UTC m=+141.934945733 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/a5122101-998b-48d5-ae6e-c4746b2ba055-default-certificate") pod "router-default-5444994796-h9rvk" (UID: "a5122101-998b-48d5-ae6e-c4746b2ba055") : failed to sync secret cache: timed out waiting for the condition
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.147858 4757 secret.go:188] Couldn't get secret openshift-ingress/router-metrics-certs-default: failed to sync secret cache: timed out waiting for the condition
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.147941 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5122101-998b-48d5-ae6e-c4746b2ba055-metrics-certs podName:a5122101-998b-48d5-ae6e-c4746b2ba055 nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.647926874 +0000 UTC m=+141.937177111 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5122101-998b-48d5-ae6e-c4746b2ba055-metrics-certs") pod "router-default-5444994796-h9rvk" (UID: "a5122101-998b-48d5-ae6e-c4746b2ba055") : failed to sync secret cache: timed out waiting for the condition
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.148196 4757 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.148302 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7add9ebb-c4ec-4eed-affb-bdd76b207c29-srv-cert podName:7add9ebb-c4ec-4eed-affb-bdd76b207c29 nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.648275604 +0000 UTC m=+141.937525831 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/7add9ebb-c4ec-4eed-affb-bdd76b207c29-srv-cert") pod "olm-operator-6b444d44fb-dz9cf" (UID: "7add9ebb-c4ec-4eed-affb-bdd76b207c29") : failed to sync secret cache: timed out waiting for the condition
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.150522 4757 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.150610 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ace4ebfd-1a19-4556-a22e-d9cc9ce6d143-cert podName:ace4ebfd-1a19-4556-a22e-d9cc9ce6d143 nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.650571894 +0000 UTC m=+141.939822201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ace4ebfd-1a19-4556-a22e-d9cc9ce6d143-cert") pod "ingress-canary-xfk54" (UID: "ace4ebfd-1a19-4556-a22e-d9cc9ce6d143") : failed to sync secret cache: timed out waiting for the condition
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.155813 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.172074 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.172179 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.672167176 +0000 UTC m=+141.961417413 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
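The recurring "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" means the mount and unmount operations are reaching the CSI code path before the hostpath driver's node plugin has registered itself over the kubelet's plugin-registration socket (the hostpath-provisioner namespace objects are only just being cached below). A sketch of the lookup, assumed shape rather than kubelet source:

    package main

    import (
        "fmt"
        "sync"
    )

    // csiRegistry stands in for the kubelet's table of node-registered CSI
    // drivers, populated via the plugin-registration socket.
    type csiRegistry struct {
        mu      sync.RWMutex
        drivers map[string]string // driver name -> unix socket endpoint
    }

    func (r *csiRegistry) client(driver string) (string, error) {
        r.mu.RLock()
        defer r.mu.RUnlock()
        ep, ok := r.drivers[driver]
        if !ok {
            return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driver)
        }
        return ep, nil
    }

    func (r *csiRegistry) register(driver, endpoint string) {
        r.mu.Lock()
        defer r.mu.Unlock()
        if r.drivers == nil {
            r.drivers = map[string]string{}
        }
        r.drivers[driver] = endpoint
    }

    func main() {
        reg := &csiRegistry{}
        if _, err := reg.client("kubevirt.io.hostpath-provisioner"); err != nil {
            fmt.Println("mount fails until the plugin registers:", err)
        }
        reg.register("kubevirt.io.hostpath-provisioner", "/var/lib/kubelet/plugins/csi-hostpath/csi.sock") // illustrative path
        if ep, err := reg.client("kubevirt.io.hostpath-provisioner"); err == nil {
            fmt.Println("after registration the mount can proceed via", ep)
        }
    }

Because the failure is transient by design, the kubelet just keeps requeueing the operation on the 500ms backoff until registration happens.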
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.172485 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.172775 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.672766454 +0000 UTC m=+141.962016691 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.174450 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.195923 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.215530 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.235833 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.255411 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.274243 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.274406 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.774382673 +0000 UTC m=+142.063632900 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.275031 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.275648 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.275687 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.775672462 +0000 UTC m=+142.064922759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.295241 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.315673 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.335284 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.368892 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84fvf\" (UniqueName: \"kubernetes.io/projected/dacc418b-f809-4317-9526-08c5781c6f68-kube-api-access-84fvf\") pod \"route-controller-manager-6576b87f9c-pf59m\" (UID: \"dacc418b-f809-4317-9526-08c5781c6f68\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.376807 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.377090 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.877072815 +0000 UTC m=+142.166323052 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
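The "kube-api-access-*" volumes whose SetUp succeeds here are projected service-account-token volumes: each bundles a bound token, the namespace's kube-root-ca.crt configmap (which is why that configmap is cached for namespace after namespace throughout this log), and the namespace name from the downward API. A sketch of that structure using the k8s.io/api types; the volume name is copied from the log and the expiry is the conventional value, both for illustration only:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        expiry := int64(3607) // conventional bound-token lifetime; illustrative
        vol := corev1.Volume{
            Name: "kube-api-access-84fvf", // name as it appears in the log
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        // 1. A bound, auto-rotated service account token.
                        {ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
                            Path:              "token",
                            ExpirationSeconds: &expiry,
                        }},
                        // 2. The API server CA bundle from kube-root-ca.crt.
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
                            Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
                        }},
                        // 3. The pod's namespace via the downward API.
                        {DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "namespace",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
                            }},
                        }},
                    },
                },
            },
        }
        fmt.Printf("%s projects %d sources\n", vol.Name, len(vol.VolumeSource.Projected.Sources))
    }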
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.378241 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.378584 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.87857087 +0000 UTC m=+142.167821107 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.389053 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjjb9\" (UniqueName: \"kubernetes.io/projected/bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a-kube-api-access-gjjb9\") pod \"apiserver-7bbb656c7d-zzvjx\" (UID: \"bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.394884 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.399351 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.413886 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.416368 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.451365 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbp9x\" (UniqueName: \"kubernetes.io/projected/1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f-kube-api-access-mbp9x\") pod \"console-operator-58897d9998-jkgsj\" (UID: \"1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f\") " pod="openshift-console-operator/console-operator-58897d9998-jkgsj"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.455555 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.475823 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.479432 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.479618 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.979598441 +0000 UTC m=+142.268848688 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.480094 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.480577 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:12:58.98055264 +0000 UTC m=+142.269802877 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
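The util.go:30 lines ("No sandbox for pod can be found. Need to start a new one") are the normal first step for every newly scheduled pod: before starting containers, the kubelet asks the CRI runtime (CRI-O on this node, per the crio-* cgroup paths later in the log) for an existing ready sandbox belonging to the pod, and creates one when none exists. A rough sketch of that decision under assumed shapes, not kubelet source:

    package main

    import "fmt"

    type sandbox struct{ id, state string }

    // sandboxes stands in for the CRI ListPodSandbox result, keyed by pod UID.
    var sandboxes = map[string][]sandbox{}

    // ensureSandbox returns a ready sandbox for the pod, creating one (the
    // stand-in for the CRI RunPodSandbox call) when none is found.
    func ensureSandbox(podUID, podName string) string {
        for _, s := range sandboxes[podUID] {
            if s.state == "SANDBOX_READY" {
                return s.id
            }
        }
        fmt.Printf("No sandbox for pod can be found. Need to start a new one pod=%q\n", podName)
        s := sandbox{id: "sandbox-" + podUID[:8], state: "SANDBOX_READY"}
        sandboxes[podUID] = append(sandboxes[podUID], s)
        return s.id
    }

    func main() {
        id := ensureSandbox("bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a",
            "openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx")
        fmt.Println("containers will join sandbox", id)
    }

The sandbox holds the pod's network namespace and pause process; container creation only proceeds once it is ready.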
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.496523 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.516155 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.536043 4757 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.555496 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.581551 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.582489 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:12:59.082471088 +0000 UTC m=+142.371721335 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.592984 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdbjh\" (UniqueName: \"kubernetes.io/projected/bab27dde-a537-445c-8d39-ad7479b66bcb-kube-api-access-rdbjh\") pod \"machine-api-operator-5694c8668f-z9qzn\" (UID: \"bab27dde-a537-445c-8d39-ad7479b66bcb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z9qzn"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.596917 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-z9qzn"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.620890 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrmz9\" (UniqueName: \"kubernetes.io/projected/ea61811e-2455-4157-a3f3-1376f4a11e8c-kube-api-access-mrmz9\") pod \"cluster-samples-operator-665b6dd947-fvpbt\" (UID: \"ea61811e-2455-4157-a3f3-1376f4a11e8c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvpbt"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.635527 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjp2x\" (UniqueName: \"kubernetes.io/projected/ad7f4116-0c15-4b08-9edc-bacd65170a95-kube-api-access-mjp2x\") pod \"authentication-operator-69f744f599-m6jnj\" (UID: \"ad7f4116-0c15-4b08-9edc-bacd65170a95\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m6jnj"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.655531 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.658175 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn87t\" (UniqueName: \"kubernetes.io/projected/c20ebd50-0f39-4321-84c3-1806672c78c0-kube-api-access-bn87t\") pod \"machine-approver-56656f9798-2g78z\" (UID: \"c20ebd50-0f39-4321-84c3-1806672c78c0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2g78z"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.676604 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.683887 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a5122101-998b-48d5-ae6e-c4746b2ba055-stats-auth\") pod \"router-default-5444994796-h9rvk\" (UID: \"a5122101-998b-48d5-ae6e-c4746b2ba055\") " pod="openshift-ingress/router-default-5444994796-h9rvk"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.683953 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6be95c99-c279-4066-a0c6-b1499d8f7e07-srv-cert\") pod \"catalog-operator-68c6474976-n44qs\" (UID: \"6be95c99-c279-4066-a0c6-b1499d8f7e07\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n44qs"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.683979 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e7cba3a-da69-495d-8f3c-286a75ca8e48-config-volume\") pod \"dns-default-m9f2c\" (UID: \"1e7cba3a-da69-495d-8f3c-286a75ca8e48\") " pod="openshift-dns/dns-default-m9f2c"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.683999 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/08b2186f-939e-4005-9fd9-1f1cc7b087d8-proxy-tls\") pod \"machine-config-controller-84d6567774-nkstc\" (UID: \"08b2186f-939e-4005-9fd9-1f1cc7b087d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nkstc"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.684022 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1e7cba3a-da69-495d-8f3c-286a75ca8e48-metrics-tls\") pod \"dns-default-m9f2c\" (UID: \"1e7cba3a-da69-495d-8f3c-286a75ca8e48\") " pod="openshift-dns/dns-default-m9f2c"
\"kubernetes.io/secret/1e7cba3a-da69-495d-8f3c-286a75ca8e48-metrics-tls\") pod \"dns-default-m9f2c\" (UID: \"1e7cba3a-da69-495d-8f3c-286a75ca8e48\") " pod="openshift-dns/dns-default-m9f2c" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.684040 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.684063 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7add9ebb-c4ec-4eed-affb-bdd76b207c29-profile-collector-cert\") pod \"olm-operator-6b444d44fb-dz9cf\" (UID: \"7add9ebb-c4ec-4eed-affb-bdd76b207c29\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dz9cf" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.684099 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5122101-998b-48d5-ae6e-c4746b2ba055-service-ca-bundle\") pod \"router-default-5444994796-h9rvk\" (UID: \"a5122101-998b-48d5-ae6e-c4746b2ba055\") " pod="openshift-ingress/router-default-5444994796-h9rvk" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.684136 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8548b94-9099-42d5-914d-c2c10561bc5a-config-volume\") pod \"collect-profiles-29494980-9zrww\" (UID: \"c8548b94-9099-42d5-914d-c2c10561bc5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-9zrww" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.684153 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6be95c99-c279-4066-a0c6-b1499d8f7e07-profile-collector-cert\") pod \"catalog-operator-68c6474976-n44qs\" (UID: \"6be95c99-c279-4066-a0c6-b1499d8f7e07\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n44qs" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.684195 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8548b94-9099-42d5-914d-c2c10561bc5a-secret-volume\") pod \"collect-profiles-29494980-9zrww\" (UID: \"c8548b94-9099-42d5-914d-c2c10561bc5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-9zrww" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.684253 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a5122101-998b-48d5-ae6e-c4746b2ba055-default-certificate\") pod \"router-default-5444994796-h9rvk\" (UID: \"a5122101-998b-48d5-ae6e-c4746b2ba055\") " pod="openshift-ingress/router-default-5444994796-h9rvk" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.684324 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5122101-998b-48d5-ae6e-c4746b2ba055-metrics-certs\") pod \"router-default-5444994796-h9rvk\" (UID: \"a5122101-998b-48d5-ae6e-c4746b2ba055\") " 
pod="openshift-ingress/router-default-5444994796-h9rvk" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.684341 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7add9ebb-c4ec-4eed-affb-bdd76b207c29-srv-cert\") pod \"olm-operator-6b444d44fb-dz9cf\" (UID: \"7add9ebb-c4ec-4eed-affb-bdd76b207c29\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dz9cf" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.684793 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ace4ebfd-1a19-4556-a22e-d9cc9ce6d143-cert\") pod \"ingress-canary-xfk54\" (UID: \"ace4ebfd-1a19-4556-a22e-d9cc9ce6d143\") " pod="openshift-ingress-canary/ingress-canary-xfk54" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.685647 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5122101-998b-48d5-ae6e-c4746b2ba055-service-ca-bundle\") pod \"router-default-5444994796-h9rvk\" (UID: \"a5122101-998b-48d5-ae6e-c4746b2ba055\") " pod="openshift-ingress/router-default-5444994796-h9rvk" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.685647 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8548b94-9099-42d5-914d-c2c10561bc5a-config-volume\") pod \"collect-profiles-29494980-9zrww\" (UID: \"c8548b94-9099-42d5-914d-c2c10561bc5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-9zrww" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.686960 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a5122101-998b-48d5-ae6e-c4746b2ba055-stats-auth\") pod \"router-default-5444994796-h9rvk\" (UID: \"a5122101-998b-48d5-ae6e-c4746b2ba055\") " pod="openshift-ingress/router-default-5444994796-h9rvk" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.687558 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5122101-998b-48d5-ae6e-c4746b2ba055-metrics-certs\") pod \"router-default-5444994796-h9rvk\" (UID: \"a5122101-998b-48d5-ae6e-c4746b2ba055\") " pod="openshift-ingress/router-default-5444994796-h9rvk" Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.687998 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:12:59.187964274 +0000 UTC m=+142.477214621 (durationBeforeRetry 500ms). 
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.688226 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7add9ebb-c4ec-4eed-affb-bdd76b207c29-profile-collector-cert\") pod \"olm-operator-6b444d44fb-dz9cf\" (UID: \"7add9ebb-c4ec-4eed-affb-bdd76b207c29\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dz9cf"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.688397 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a5122101-998b-48d5-ae6e-c4746b2ba055-default-certificate\") pod \"router-default-5444994796-h9rvk\" (UID: \"a5122101-998b-48d5-ae6e-c4746b2ba055\") " pod="openshift-ingress/router-default-5444994796-h9rvk"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.689145 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/08b2186f-939e-4005-9fd9-1f1cc7b087d8-proxy-tls\") pod \"machine-config-controller-84d6567774-nkstc\" (UID: \"08b2186f-939e-4005-9fd9-1f1cc7b087d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nkstc"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.689201 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8548b94-9099-42d5-914d-c2c10561bc5a-secret-volume\") pod \"collect-profiles-29494980-9zrww\" (UID: \"c8548b94-9099-42d5-914d-c2c10561bc5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-9zrww"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.690230 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7add9ebb-c4ec-4eed-affb-bdd76b207c29-srv-cert\") pod \"olm-operator-6b444d44fb-dz9cf\" (UID: \"7add9ebb-c4ec-4eed-affb-bdd76b207c29\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dz9cf"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.695845 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.713989 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ace4ebfd-1a19-4556-a22e-d9cc9ce6d143-cert\") pod \"ingress-canary-xfk54\" (UID: \"ace4ebfd-1a19-4556-a22e-d9cc9ce6d143\") " pod="openshift-ingress-canary/ingress-canary-xfk54"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.715465 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.732757 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-jkgsj"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.737633 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.743060 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m"]
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.744652 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e7cba3a-da69-495d-8f3c-286a75ca8e48-config-volume\") pod \"dns-default-m9f2c\" (UID: \"1e7cba3a-da69-495d-8f3c-286a75ca8e48\") " pod="openshift-dns/dns-default-m9f2c"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.746564 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx"]
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.756105 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 29 15:12:58 crc kubenswrapper[4757]: W0129 15:12:58.756703 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddacc418b_f809_4317_9526_08c5781c6f68.slice/crio-83e8897cb2c923640d6c9b2f2923cd88c5675980240b5173cc2fb05dd69d2d6a WatchSource:0}: Error finding container 83e8897cb2c923640d6c9b2f2923cd88c5675980240b5173cc2fb05dd69d2d6a: Status 404 returned error can't find the container with id 83e8897cb2c923640d6c9b2f2923cd88c5675980240b5173cc2fb05dd69d2d6a
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.756754 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-z9qzn"]
Jan 29 15:12:58 crc kubenswrapper[4757]: W0129 15:12:58.768037 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbab27dde_a537_445c_8d39_ad7479b66bcb.slice/crio-db9dbeb72812c9d64e162d327615b5f30fdf7b68b067a59e934acbdd27f4d992 WatchSource:0}: Error finding container db9dbeb72812c9d64e162d327615b5f30fdf7b68b067a59e934acbdd27f4d992: Status 404 returned error can't find the container with id db9dbeb72812c9d64e162d327615b5f30fdf7b68b067a59e934acbdd27f4d992
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.775976 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.780225 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1e7cba3a-da69-495d-8f3c-286a75ca8e48-metrics-tls\") pod \"dns-default-m9f2c\" (UID: \"1e7cba3a-da69-495d-8f3c-286a75ca8e48\") " pod="openshift-dns/dns-default-m9f2c"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.785284 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.786378 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:12:59.286356356 +0000 UTC m=+142.575606593 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.794390 4757 request.go:700] Waited for 1.775016171s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/serviceaccounts/openshift-config-operator/token
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.809106 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2g78z"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.813863 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr74n\" (UniqueName: \"kubernetes.io/projected/9e9103bc-a2bb-4075-8454-c6f0af5c2c29-kube-api-access-wr74n\") pod \"openshift-config-operator-7777fb866f-hghqd\" (UID: \"9e9103bc-a2bb-4075-8454-c6f0af5c2c29\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hghqd"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.817588 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvpbt"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.819692 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6be95c99-c279-4066-a0c6-b1499d8f7e07-srv-cert\") pod \"catalog-operator-68c6474976-n44qs\" (UID: \"6be95c99-c279-4066-a0c6-b1499d8f7e07\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n44qs"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.820021 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6be95c99-c279-4066-a0c6-b1499d8f7e07-profile-collector-cert\") pod \"catalog-operator-68c6474976-n44qs\" (UID: \"6be95c99-c279-4066-a0c6-b1499d8f7e07\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n44qs"
Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.835063 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hghqd" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.841531 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clvwv\" (UniqueName: \"kubernetes.io/projected/42aab7ad-1293-4b39-8199-0b7f944a8f31-kube-api-access-clvwv\") pod \"controller-manager-879f6c89f-8tbgk\" (UID: \"42aab7ad-1293-4b39-8199-0b7f944a8f31\") " pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.848815 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2bx5\" (UniqueName: \"kubernetes.io/projected/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-kube-api-access-k2bx5\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.849410 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-m6jnj" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.868883 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-bound-sa-token\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.884428 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.890451 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.890759 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:12:59.390745389 +0000 UTC m=+142.679995636 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.897475 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f404652-4bd9-4720-b625-01ae3c2d29fa-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-jx6g5\" (UID: \"1f404652-4bd9-4720-b625-01ae3c2d29fa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jx6g5" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.911257 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqjsp\" (UniqueName: \"kubernetes.io/projected/1e93cae3-c9b6-493c-a8cc-c09cc83b0dca-kube-api-access-hqjsp\") pod \"kube-storage-version-migrator-operator-b67b599dd-pw8nl\" (UID: \"1e93cae3-c9b6-493c-a8cc-c09cc83b0dca\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pw8nl" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.919439 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jkgsj"] Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.928336 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8351b0cf-f243-4fe3-ba94-30f3ee17320e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2x58v\" (UID: \"8351b0cf-f243-4fe3-ba94-30f3ee17320e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2x58v" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.979321 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbj7w\" (UniqueName: \"kubernetes.io/projected/69d5d601-aa72-4044-ad14-81c12a34c8f0-kube-api-access-hbj7w\") pod \"service-ca-9c57cc56f-m7q76\" (UID: \"69d5d601-aa72-4044-ad14-81c12a34c8f0\") " pod="openshift-service-ca/service-ca-9c57cc56f-m7q76" Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.991853 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.991906 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntvkh\" (UniqueName: \"kubernetes.io/projected/da973e95-27c1-4f17-87e4-79bf0bc0e0fe-kube-api-access-ntvkh\") pod \"multus-admission-controller-857f4d67dd-trnpt\" (UID: \"da973e95-27c1-4f17-87e4-79bf0bc0e0fe\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-trnpt" Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.992094 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 15:12:59.492074649 +0000 UTC m=+142.781324956 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:58 crc kubenswrapper[4757]: I0129 15:12:58.992309 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:58 crc kubenswrapper[4757]: E0129 15:12:58.992660 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:12:59.492643827 +0000 UTC m=+142.781894104 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.013933 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9t8k\" (UniqueName: \"kubernetes.io/projected/616e840f-aaeb-48cc-b979-f690d54a8c95-kube-api-access-k9t8k\") pod \"dns-operator-744455d44c-pvt9r\" (UID: \"616e840f-aaeb-48cc-b979-f690d54a8c95\") " pod="openshift-dns-operator/dns-operator-744455d44c-pvt9r" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.034092 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnbdj\" (UniqueName: \"kubernetes.io/projected/fa387e7d-5a82-4577-bbe3-ea5aeb17adc2-kube-api-access-vnbdj\") pod \"csi-hostpathplugin-wsz9t\" (UID: \"fa387e7d-5a82-4577-bbe3-ea5aeb17adc2\") " pod="hostpath-provisioner/csi-hostpathplugin-wsz9t" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.044504 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-pvt9r" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.051954 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jx6g5" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.060589 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kqpw\" (UniqueName: \"kubernetes.io/projected/905f21b5-42ca-4558-b66c-b957fd41c9e8-kube-api-access-8kqpw\") pod \"packageserver-d55dfcdfc-wnhtd\" (UID: \"905f21b5-42ca-4558-b66c-b957fd41c9e8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnhtd" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.064654 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-trnpt" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.076311 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2x58v" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.081807 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr5xt\" (UniqueName: \"kubernetes.io/projected/589345a6-68e3-4e06-bf66-b30c3457f59c-kube-api-access-pr5xt\") pod \"cluster-image-registry-operator-dc59b4c8b-x9p7l\" (UID: \"589345a6-68e3-4e06-bf66-b30c3457f59c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9p7l" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.087447 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnhtd" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.093616 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:12:59 crc kubenswrapper[4757]: E0129 15:12:59.093784 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:12:59.59374637 +0000 UTC m=+142.882996607 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.094309 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:59 crc kubenswrapper[4757]: E0129 15:12:59.094788 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:12:59.594774681 +0000 UTC m=+142.884024918 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.099972 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf8gx\" (UniqueName: \"kubernetes.io/projected/08b2186f-939e-4005-9fd9-1f1cc7b087d8-kube-api-access-jf8gx\") pod \"machine-config-controller-84d6567774-nkstc\" (UID: \"08b2186f-939e-4005-9fd9-1f1cc7b087d8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nkstc" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.115766 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2g78z" event={"ID":"c20ebd50-0f39-4321-84c3-1806672c78c0","Type":"ContainerStarted","Data":"dc8ff79327c77a589a1c00149f98b20690e3f2e7e0ec1cd9a53de0e16e9d3dda"} Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.125081 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sczqk\" (UniqueName: \"kubernetes.io/projected/b10bc118-1493-4055-a8c2-1a1b9aca7c91-kube-api-access-sczqk\") pod \"control-plane-machine-set-operator-78cbb6b69f-qzc6g\" (UID: \"b10bc118-1493-4055-a8c2-1a1b9aca7c91\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qzc6g" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.125982 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m" event={"ID":"dacc418b-f809-4317-9526-08c5781c6f68","Type":"ContainerStarted","Data":"83e8897cb2c923640d6c9b2f2923cd88c5675980240b5173cc2fb05dd69d2d6a"} Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.126651 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-z9qzn" 
event={"ID":"bab27dde-a537-445c-8d39-ad7479b66bcb","Type":"ContainerStarted","Data":"db9dbeb72812c9d64e162d327615b5f30fdf7b68b067a59e934acbdd27f4d992"} Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.127167 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-jkgsj" event={"ID":"1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f","Type":"ContainerStarted","Data":"6ceada7187164b265ccb5758ec6cdd4b007630a4747dc3e2d7ca9cb8abf4134f"} Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.127699 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx" event={"ID":"bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a","Type":"ContainerStarted","Data":"cc1756899be6b14e144683e7e87f6c4b414cde3229abc85d76bd9757f318b8ec"} Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.130542 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf49j\" (UniqueName: \"kubernetes.io/projected/dd74c040-9e89-4c40-8e16-5ae6c0f6e65f-kube-api-access-rf49j\") pod \"ingress-operator-5b745b69d9-tvv95\" (UID: \"dd74c040-9e89-4c40-8e16-5ae6c0f6e65f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tvv95" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.133716 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pw8nl" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.137107 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvpbt"] Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.148545 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb2nv\" (UniqueName: \"kubernetes.io/projected/6be95c99-c279-4066-a0c6-b1499d8f7e07-kube-api-access-kb2nv\") pod \"catalog-operator-68c6474976-n44qs\" (UID: \"6be95c99-c279-4066-a0c6-b1499d8f7e07\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n44qs" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.171198 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dd74c040-9e89-4c40-8e16-5ae6c0f6e65f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-tvv95\" (UID: \"dd74c040-9e89-4c40-8e16-5ae6c0f6e65f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tvv95" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.177822 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-m7q76" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.189848 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krf9j\" (UniqueName: \"kubernetes.io/projected/f10cf2ea-d11c-422e-9f8e-b93d422df097-kube-api-access-krf9j\") pod \"package-server-manager-789f6589d5-ssg7r\" (UID: \"f10cf2ea-d11c-422e-9f8e-b93d422df097\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ssg7r" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.191324 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tvv95" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.196507 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bp6c\" (UniqueName: \"kubernetes.io/projected/9dd1f071-c13e-42a1-80bd-81d4121b0cdc-kube-api-access-6bp6c\") pod \"machine-config-operator-74547568cd-lhw6r\" (UID: \"9dd1f071-c13e-42a1-80bd-81d4121b0cdc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lhw6r" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.197796 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:12:59 crc kubenswrapper[4757]: E0129 15:12:59.198391 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:12:59.698351429 +0000 UTC m=+142.987601666 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.200598 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qzc6g" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.210990 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0379bce9-e0c6-4283-8fb4-fcf300dc30bf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-plpn9\" (UID: \"0379bce9-e0c6-4283-8fb4-fcf300dc30bf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-plpn9" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.212873 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lhw6r" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.233311 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dngr\" (UniqueName: \"kubernetes.io/projected/7add9ebb-c4ec-4eed-affb-bdd76b207c29-kube-api-access-6dngr\") pod \"olm-operator-6b444d44fb-dz9cf\" (UID: \"7add9ebb-c4ec-4eed-affb-bdd76b207c29\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dz9cf" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.268189 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntmc8\" (UniqueName: \"kubernetes.io/projected/a5122101-998b-48d5-ae6e-c4746b2ba055-kube-api-access-ntmc8\") pod \"router-default-5444994796-h9rvk\" (UID: \"a5122101-998b-48d5-ae6e-c4746b2ba055\") " pod="openshift-ingress/router-default-5444994796-h9rvk" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.268467 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nkstc" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.270452 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfckq\" (UniqueName: \"kubernetes.io/projected/8119cbd7-40a4-4875-b49c-1e982ec9acd8-kube-api-access-sfckq\") pod \"service-ca-operator-777779d784-rpcpn\" (UID: \"8119cbd7-40a4-4875-b49c-1e982ec9acd8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpcpn" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.276546 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n44qs" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.289166 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmsm6\" (UniqueName: \"kubernetes.io/projected/bcb581d1-4a29-4bf3-9df9-1669cf88e9f3-kube-api-access-nmsm6\") pod \"openshift-apiserver-operator-796bbdcf4f-h7pw2\" (UID: \"bcb581d1-4a29-4bf3-9df9-1669cf88e9f3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h7pw2" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.297365 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-wsz9t" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.300079 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:59 crc kubenswrapper[4757]: E0129 15:12:59.300453 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:12:59.800439153 +0000 UTC m=+143.089689390 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.310614 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcp8m\" (UniqueName: \"kubernetes.io/projected/a0f71154-b1ff-4e61-9c93-8bcb95678bce-kube-api-access-kcp8m\") pod \"console-f9d7485db-skxmw\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.328995 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrcsn\" (UniqueName: \"kubernetes.io/projected/0b0330c1-19bb-492e-815a-2827e5749d68-kube-api-access-lrcsn\") pod \"apiserver-76f77b778f-zrp48\" (UID: \"0b0330c1-19bb-492e-815a-2827e5749d68\") " pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.334738 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.381978 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5ljf\" (UniqueName: \"kubernetes.io/projected/818a92e0-3e21-4f17-8950-a74066570368-kube-api-access-f5ljf\") pod \"openshift-controller-manager-operator-756b6f6bc6-bkltp\" (UID: \"818a92e0-3e21-4f17-8950-a74066570368\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bkltp" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.389369 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk4ds\" (UniqueName: \"kubernetes.io/projected/ca0b207e-f487-4256-b01b-47aecb6921b6-kube-api-access-vk4ds\") pod \"machine-config-server-mfd9g\" (UID: \"ca0b207e-f487-4256-b01b-47aecb6921b6\") " pod="openshift-machine-config-operator/machine-config-server-mfd9g" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.395560 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ssg7r" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.400992 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:12:59 crc kubenswrapper[4757]: E0129 15:12:59.401382 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:12:59.901366271 +0000 UTC m=+143.190616508 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.429986 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmsld\" (UniqueName: \"kubernetes.io/projected/e9d54611-82e4-4698-b654-62a1d7144225-kube-api-access-zmsld\") pod \"oauth-openshift-558db77b4-mg555\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.445549 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-plpn9" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.451540 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.453185 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlr7r\" (UniqueName: \"kubernetes.io/projected/d68b032e-f86c-4928-a676-03c9e49c6722-kube-api-access-nlr7r\") pod \"marketplace-operator-79b997595-grbn4\" (UID: \"d68b032e-f86c-4928-a676-03c9e49c6722\") " pod="openshift-marketplace/marketplace-operator-79b997595-grbn4" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.458113 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpcpn" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.466726 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-mfd9g" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.473477 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24h4l\" (UniqueName: \"kubernetes.io/projected/88566fd4-0a9f-42dd-a6d5-989dc7176aea-kube-api-access-24h4l\") pod \"migrator-59844c95c7-dkjl5\" (UID: \"88566fd4-0a9f-42dd-a6d5-989dc7176aea\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dkjl5" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.476859 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brpr8\" (UniqueName: \"kubernetes.io/projected/1e7cba3a-da69-495d-8f3c-286a75ca8e48-kube-api-access-brpr8\") pod \"dns-default-m9f2c\" (UID: \"1e7cba3a-da69-495d-8f3c-286a75ca8e48\") " pod="openshift-dns/dns-default-m9f2c" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.488672 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfgxh\" (UniqueName: \"kubernetes.io/projected/cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8-kube-api-access-bfgxh\") pod \"etcd-operator-b45778765-k5nbp\" (UID: \"cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-k5nbp" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.490588 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2f42\" (UniqueName: \"kubernetes.io/projected/c8548b94-9099-42d5-914d-c2c10561bc5a-kube-api-access-j2f42\") pod \"collect-profiles-29494980-9zrww\" (UID: \"c8548b94-9099-42d5-914d-c2c10561bc5a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-9zrww" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.502295 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:59 crc kubenswrapper[4757]: E0129 15:12:59.502975 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:00.00296078 +0000 UTC m=+143.292211017 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.520797 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/589345a6-68e3-4e06-bf66-b30c3457f59c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-x9p7l\" (UID: \"589345a6-68e3-4e06-bf66-b30c3457f59c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9p7l" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.526252 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dkjl5" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.531984 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6vlw\" (UniqueName: \"kubernetes.io/projected/3e6ceaed-34b1-4c4f-abe3-96756d34e30f-kube-api-access-f6vlw\") pod \"downloads-7954f5f757-gs77j\" (UID: \"3e6ceaed-34b1-4c4f-abe3-96756d34e30f\") " pod="openshift-console/downloads-7954f5f757-gs77j" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.532254 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dz9cf" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.548160 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-h9rvk" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.554188 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwbsq\" (UniqueName: \"kubernetes.io/projected/ace4ebfd-1a19-4556-a22e-d9cc9ce6d143-kube-api-access-hwbsq\") pod \"ingress-canary-xfk54\" (UID: \"ace4ebfd-1a19-4556-a22e-d9cc9ce6d143\") " pod="openshift-ingress-canary/ingress-canary-xfk54" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.558143 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.564612 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-9zrww" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.572342 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8tbgk"] Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.580524 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-gs77j" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.587329 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h7pw2" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.596505 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bkltp" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.608477 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9p7l" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.609076 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xfk54" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.609549 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:12:59 crc kubenswrapper[4757]: E0129 15:12:59.609737 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:00.109708384 +0000 UTC m=+143.398958621 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.610067 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:59 crc kubenswrapper[4757]: E0129 15:12:59.610463 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:00.110444106 +0000 UTC m=+143.399694343 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.611941 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-m9f2c" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.656770 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-k5nbp" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.707689 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-grbn4" Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.712746 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:12:59 crc kubenswrapper[4757]: E0129 15:12:59.712857 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:00.212839787 +0000 UTC m=+143.502090024 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.714123 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:59 crc kubenswrapper[4757]: E0129 15:12:59.714658 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:00.214645852 +0000 UTC m=+143.503896079 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.727060 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hghqd"] Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.735235 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-m6jnj"] Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.763381 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qzc6g"] Jan 29 15:12:59 crc kubenswrapper[4757]: W0129 15:12:59.791168 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42aab7ad_1293_4b39_8199_0b7f944a8f31.slice/crio-414a35efbacc66d2da9e2c69276a7d50806840d711f8995f95f18ea1e63a5200 WatchSource:0}: Error finding container 414a35efbacc66d2da9e2c69276a7d50806840d711f8995f95f18ea1e63a5200: Status 404 returned error can't find the container with id 414a35efbacc66d2da9e2c69276a7d50806840d711f8995f95f18ea1e63a5200 Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.802064 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-lhw6r"] Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.815585 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:12:59 crc kubenswrapper[4757]: E0129 15:12:59.815998 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:00.315949512 +0000 UTC m=+143.605199749 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:59 crc kubenswrapper[4757]: I0129 15:12:59.921304 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:12:59 crc kubenswrapper[4757]: E0129 15:12:59.921691 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:00.421660884 +0000 UTC m=+143.710911121 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:12:59 crc kubenswrapper[4757]: W0129 15:12:59.958325 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dd1f071_c13e_42a1_80bd_81d4121b0cdc.slice/crio-a4202d17f95e7cf2610fa379ecab845f239ed185d5c06bbdd51b3406d0c14177 WatchSource:0}: Error finding container a4202d17f95e7cf2610fa379ecab845f239ed185d5c06bbdd51b3406d0c14177: Status 404 returned error can't find the container with id a4202d17f95e7cf2610fa379ecab845f239ed185d5c06bbdd51b3406d0c14177 Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.021887 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:00 crc kubenswrapper[4757]: E0129 15:13:00.022562 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:00.522516191 +0000 UTC m=+143.811766428 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.068834 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-tvv95"] Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.123565 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:00 crc kubenswrapper[4757]: E0129 15:13:00.124036 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:00.624001266 +0000 UTC m=+143.913251503 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.169417 4757 generic.go:334] "Generic (PLEG): container finished" podID="bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a" containerID="4d2711761d997372226016c596eb36ed50a9d7f99d96ac608a26c2d3c669697b" exitCode=0 Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.169533 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx" event={"ID":"bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a","Type":"ContainerDied","Data":"4d2711761d997372226016c596eb36ed50a9d7f99d96ac608a26c2d3c669697b"} Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.226480 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:00 crc kubenswrapper[4757]: E0129 15:13:00.228041 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:00.728018467 +0000 UTC m=+144.017268704 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.240305 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-m6jnj" event={"ID":"ad7f4116-0c15-4b08-9edc-bacd65170a95","Type":"ContainerStarted","Data":"e2103168dfa3e37fd898aeea3cb15203428cba358a0953bcb91a9ce73cdb9781"} Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.248337 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jx6g5"] Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.259886 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-trnpt"] Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.276039 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvpbt" event={"ID":"ea61811e-2455-4157-a3f3-1376f4a11e8c","Type":"ContainerStarted","Data":"7e3acdc2f4ff7b254d5ac933e4314a5524f84565efc45b5bc68d571208130e90"} Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.294129 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk" event={"ID":"42aab7ad-1293-4b39-8199-0b7f944a8f31","Type":"ContainerStarted","Data":"414a35efbacc66d2da9e2c69276a7d50806840d711f8995f95f18ea1e63a5200"} Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.296372 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hghqd" event={"ID":"9e9103bc-a2bb-4075-8454-c6f0af5c2c29","Type":"ContainerStarted","Data":"16f0f0c2d01e6319a69a665f7d2baede8698cd1a5c0d15134c2be177d781fc3a"} Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.328018 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m" event={"ID":"dacc418b-f809-4317-9526-08c5781c6f68","Type":"ContainerStarted","Data":"782c17b0ca95e95c1dfbc7c966fba7678ba47041f0793d682790c816c8351bde"} Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.328062 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m" Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.329804 4757 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-pf59m container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.329876 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m" podUID="dacc418b-f809-4317-9526-08c5781c6f68" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial 
tcp 10.217.0.5:8443: connect: connection refused" Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.335019 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:00 crc kubenswrapper[4757]: E0129 15:13:00.336530 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:00.836513954 +0000 UTC m=+144.125764181 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.341671 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-z9qzn" event={"ID":"bab27dde-a537-445c-8d39-ad7479b66bcb","Type":"ContainerStarted","Data":"c8d0ae3535e5538b2c9c14bf2966c5542c88caf89b21de36162519d38a844326"} Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.351380 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnhtd"] Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.351539 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lhw6r" event={"ID":"9dd1f071-c13e-42a1-80bd-81d4121b0cdc","Type":"ContainerStarted","Data":"a4202d17f95e7cf2610fa379ecab845f239ed185d5c06bbdd51b3406d0c14177"} Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.360875 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qzc6g" event={"ID":"b10bc118-1493-4055-a8c2-1a1b9aca7c91","Type":"ContainerStarted","Data":"0fb170c8c6f2d58698e1c0e159e7e6a9d55730661fc2190334f6212fe4cb4b3a"} Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.388804 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-mfd9g" event={"ID":"ca0b207e-f487-4256-b01b-47aecb6921b6","Type":"ContainerStarted","Data":"e3a33dfa0fa1574d5da6b5c1c66719f14bc40bf6311fc0c90819daec228211c5"} Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.438793 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:00 crc kubenswrapper[4757]: E0129 15:13:00.440073 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-29 15:13:00.940052991 +0000 UTC m=+144.229303228 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.478339 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nkstc"] Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.507665 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-m7q76"] Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.540605 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:00 crc kubenswrapper[4757]: E0129 15:13:00.541039 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:01.041023651 +0000 UTC m=+144.330273888 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.546327 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pw8nl"] Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.557513 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pvt9r"] Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.577644 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m" podStartSLOduration=120.577619886 podStartE2EDuration="2m0.577619886s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:00.57409707 +0000 UTC m=+143.863347327" watchObservedRunningTime="2026-01-29 15:13:00.577619886 +0000 UTC m=+143.866870133" Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.641563 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:00 crc kubenswrapper[4757]: E0129 15:13:00.642022 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:01.142006441 +0000 UTC m=+144.431256678 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:00 crc kubenswrapper[4757]: W0129 15:13:00.681996 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod616e840f_aaeb_48cc_b979_f690d54a8c95.slice/crio-d055a704ef0f51544d2539602932bb7a08766cce6d98b0cee163af8bb03194e5 WatchSource:0}: Error finding container d055a704ef0f51544d2539602932bb7a08766cce6d98b0cee163af8bb03194e5: Status 404 returned error can't find the container with id d055a704ef0f51544d2539602932bb7a08766cce6d98b0cee163af8bb03194e5 Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.699832 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-skxmw"] Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.743241 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:00 crc kubenswrapper[4757]: E0129 15:13:00.744547 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:01.244527657 +0000 UTC m=+144.533777894 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.808943 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h7pw2"] Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.844792 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:00 crc kubenswrapper[4757]: E0129 15:13:00.847093 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:01.347062224 +0000 UTC m=+144.636312471 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.916527 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wsz9t"] Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.950705 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:00 crc kubenswrapper[4757]: E0129 15:13:00.951170 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:01.451154528 +0000 UTC m=+144.740404765 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:00 crc kubenswrapper[4757]: I0129 15:13:00.974884 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rpcpn"] Jan 29 15:13:00 crc kubenswrapper[4757]: W0129 15:13:00.976430 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa387e7d_5a82_4577_bbe3_ea5aeb17adc2.slice/crio-9419ad72fc5d3cb0b7619a74bee5170920603d8d3a35abca666b5275800c779f WatchSource:0}: Error finding container 9419ad72fc5d3cb0b7619a74bee5170920603d8d3a35abca666b5275800c779f: Status 404 returned error can't find the container with id 9419ad72fc5d3cb0b7619a74bee5170920603d8d3a35abca666b5275800c779f Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.001046 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ssg7r"] Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.023338 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n44qs"] Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.029149 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2x58v"] Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.032303 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-zrp48"] Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.052876 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:01 crc kubenswrapper[4757]: E0129 15:13:01.053000 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:01.552975033 +0000 UTC m=+144.842225270 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.056455 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:01 crc kubenswrapper[4757]: E0129 15:13:01.056785 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:01.556769638 +0000 UTC m=+144.846019875 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:01 crc kubenswrapper[4757]: W0129 15:13:01.092363 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8119cbd7_40a4_4875_b49c_1e982ec9acd8.slice/crio-119d1a7c9c892ff27240bc76a3d786757c2f5f6e061ab575cf2cc8503e14d4e3 WatchSource:0}: Error finding container 119d1a7c9c892ff27240bc76a3d786757c2f5f6e061ab575cf2cc8503e14d4e3: Status 404 returned error can't find the container with id 119d1a7c9c892ff27240bc76a3d786757c2f5f6e061ab575cf2cc8503e14d4e3 Jan 29 15:13:01 crc kubenswrapper[4757]: W0129 15:13:01.111505 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf10cf2ea_d11c_422e_9f8e_b93d422df097.slice/crio-4c39871b6bfd5d12a9283ae20e3e9de935ad254f1739ccd7ae32003f2bbcebd1 WatchSource:0}: Error finding container 4c39871b6bfd5d12a9283ae20e3e9de935ad254f1739ccd7ae32003f2bbcebd1: Status 404 returned error can't find the container with id 4c39871b6bfd5d12a9283ae20e3e9de935ad254f1739ccd7ae32003f2bbcebd1 Jan 29 15:13:01 crc kubenswrapper[4757]: W0129 15:13:01.111722 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8351b0cf_f243_4fe3_ba94_30f3ee17320e.slice/crio-f64afdae807effd92c077a8d6a5b74c0ce093a835a59a7b6215be8fecf38408f WatchSource:0}: Error finding container f64afdae807effd92c077a8d6a5b74c0ce093a835a59a7b6215be8fecf38408f: Status 404 returned error can't find the container with id f64afdae807effd92c077a8d6a5b74c0ce093a835a59a7b6215be8fecf38408f Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.160025 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:01 crc kubenswrapper[4757]: E0129 15:13:01.160162 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:01.6601451 +0000 UTC m=+144.949395337 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.160843 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:01 crc kubenswrapper[4757]: E0129 15:13:01.161131 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:01.6611217 +0000 UTC m=+144.950371937 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:01 crc kubenswrapper[4757]: W0129 15:13:01.210357 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6be95c99_c279_4066_a0c6_b1499d8f7e07.slice/crio-f57aac82cbdbc5a9e0df4174e0e77587bf8910d133b6267000d5e45bd448b7c5 WatchSource:0}: Error finding container f57aac82cbdbc5a9e0df4174e0e77587bf8910d133b6267000d5e45bd448b7c5: Status 404 returned error can't find the container with id f57aac82cbdbc5a9e0df4174e0e77587bf8910d133b6267000d5e45bd448b7c5 Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.219041 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-gs77j"] Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.227060 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494980-9zrww"] Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.231701 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-plpn9"] Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.261060 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bkltp"] Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.261369 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:01 crc kubenswrapper[4757]: E0129 15:13:01.261677 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:01.761665586 +0000 UTC m=+145.050915823 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.274209 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9p7l"] Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.289411 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-dkjl5"] Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.294293 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xfk54"] Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.362990 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:01 crc kubenswrapper[4757]: E0129 15:13:01.363639 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:01.863624106 +0000 UTC m=+145.152874343 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.425671 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n44qs" event={"ID":"6be95c99-c279-4066-a0c6-b1499d8f7e07","Type":"ContainerStarted","Data":"f57aac82cbdbc5a9e0df4174e0e77587bf8910d133b6267000d5e45bd448b7c5"} Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.425736 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-m7q76" event={"ID":"69d5d601-aa72-4044-ad14-81c12a34c8f0","Type":"ContainerStarted","Data":"5b33b881d4d14e5a71d4a794997a214343b863310d18d6a7952e73269d2cca15"} Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.425752 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-skxmw" event={"ID":"a0f71154-b1ff-4e61-9c93-8bcb95678bce","Type":"ContainerStarted","Data":"6623043e5d5ee87ab09656f41ca181c2e045e217709097f1a3dab3c981305c89"} Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.435594 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-m9f2c"] Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.438904 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wsz9t" event={"ID":"fa387e7d-5a82-4577-bbe3-ea5aeb17adc2","Type":"ContainerStarted","Data":"9419ad72fc5d3cb0b7619a74bee5170920603d8d3a35abca666b5275800c779f"} Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.441405 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-k5nbp"] Jan 29 15:13:01 crc kubenswrapper[4757]: W0129 15:13:01.448006 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e6ceaed_34b1_4c4f_abe3_96756d34e30f.slice/crio-4c3e5699df9d988b871fd9545e38a74e2f5aa771b7af33002fd1e39e868ee87e WatchSource:0}: Error finding container 4c3e5699df9d988b871fd9545e38a74e2f5aa771b7af33002fd1e39e868ee87e: Status 404 returned error can't find the container with id 4c3e5699df9d988b871fd9545e38a74e2f5aa771b7af33002fd1e39e868ee87e Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.452829 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nkstc" event={"ID":"08b2186f-939e-4005-9fd9-1f1cc7b087d8","Type":"ContainerStarted","Data":"a1e288c573aa06dbd554515570959e40bc8c2b75fcb8737ceb6b7ea40cda66c7"} Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.460220 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-grbn4"] Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.470002 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:01 crc kubenswrapper[4757]: E0129 15:13:01.470368 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:01.970353219 +0000 UTC m=+145.259603456 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.488020 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvpbt" event={"ID":"ea61811e-2455-4157-a3f3-1376f4a11e8c","Type":"ContainerStarted","Data":"0fb23265bdb481bc655ae4eabef4f39cf72358cd339140c5d3a2fab0bc07796c"} Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.505071 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2g78z" event={"ID":"c20ebd50-0f39-4321-84c3-1806672c78c0","Type":"ContainerStarted","Data":"a1dc78f4ea6d0a52a50aea0da3aecf2fac0c807a8eb1ffc2b88e3e1e4e8f1b47"} Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.507693 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2x58v" event={"ID":"8351b0cf-f243-4fe3-ba94-30f3ee17320e","Type":"ContainerStarted","Data":"f64afdae807effd92c077a8d6a5b74c0ce093a835a59a7b6215be8fecf38408f"} Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.523813 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ssg7r" event={"ID":"f10cf2ea-d11c-422e-9f8e-b93d422df097","Type":"ContainerStarted","Data":"4c39871b6bfd5d12a9283ae20e3e9de935ad254f1739ccd7ae32003f2bbcebd1"} Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.530535 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpcpn" event={"ID":"8119cbd7-40a4-4875-b49c-1e982ec9acd8","Type":"ContainerStarted","Data":"119d1a7c9c892ff27240bc76a3d786757c2f5f6e061ab575cf2cc8503e14d4e3"} Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.542632 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qzc6g" event={"ID":"b10bc118-1493-4055-a8c2-1a1b9aca7c91","Type":"ContainerStarted","Data":"bce31f38b8940aea18d2c1144491780cdf2e6e87bc445d96dc38178460d9fef5"} Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.568518 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dz9cf"] Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.571116 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: 
\"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.573625 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mg555"] Jan 29 15:13:01 crc kubenswrapper[4757]: E0129 15:13:01.573978 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:02.073966049 +0000 UTC m=+145.363216286 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.574687 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-jkgsj" event={"ID":"1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f","Type":"ContainerStarted","Data":"cabe3493dbf35abe4a81a4c69315c4b3c67f6d63360364a4edd59ef7004a1ba7"} Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.575566 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-jkgsj" Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.577399 4757 patch_prober.go:28] interesting pod/console-operator-58897d9998-jkgsj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.577438 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-jkgsj" podUID="1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.581436 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnhtd" event={"ID":"905f21b5-42ca-4558-b66c-b957fd41c9e8","Type":"ContainerStarted","Data":"ee14d80800879a1e9467da984b61817d326215f818c7d57af9fffb2c4d18882f"} Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.589192 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-zrp48" event={"ID":"0b0330c1-19bb-492e-815a-2827e5749d68","Type":"ContainerStarted","Data":"94aa6024f54cddb05566a65996bdd64f8d139fd64c653c4dbce0aa7e7db9430d"} Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.592609 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-m6jnj" event={"ID":"ad7f4116-0c15-4b08-9edc-bacd65170a95","Type":"ContainerStarted","Data":"5db12006c3379accce92dce68172bd7c53b1a777411b717ebe1ac0b0bb23d92e"} Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.599629 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tvv95" event={"ID":"dd74c040-9e89-4c40-8e16-5ae6c0f6e65f","Type":"ContainerStarted","Data":"c5396a3f86f2b6be6676eb890824454265ccc12cbcb4ccbc879a363d507bb7db"} Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.604610 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jx6g5" event={"ID":"1f404652-4bd9-4720-b625-01ae3c2d29fa","Type":"ContainerStarted","Data":"9e2c0576b33ba10e5663a36695a9b7e7e2698804f0b4a5481fc02d3e6e561fd3"} Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.609630 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h7pw2" event={"ID":"bcb581d1-4a29-4bf3-9df9-1669cf88e9f3","Type":"ContainerStarted","Data":"ac710b79f0162286b305311654652b380d176aeaeffd4700f9317ef212377a97"} Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.619184 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-trnpt" event={"ID":"da973e95-27c1-4f17-87e4-79bf0bc0e0fe","Type":"ContainerStarted","Data":"1841d7bf359f1abc7838d92c67927265b162e64305327922bbb1c45f3fdcb4fb"} Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.620801 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pvt9r" event={"ID":"616e840f-aaeb-48cc-b979-f690d54a8c95","Type":"ContainerStarted","Data":"d055a704ef0f51544d2539602932bb7a08766cce6d98b0cee163af8bb03194e5"} Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.621588 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pw8nl" event={"ID":"1e93cae3-c9b6-493c-a8cc-c09cc83b0dca","Type":"ContainerStarted","Data":"43b8d929c332b5d1ee91dd21a3620f8ea56016a02e3da7b82de64516df44215b"} Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.624166 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-mfd9g" event={"ID":"ca0b207e-f487-4256-b01b-47aecb6921b6","Type":"ContainerStarted","Data":"95d314e8ab053b191815f21367f3171156584ae32198d5893e6d9500ee0b11e7"} Jan 29 15:13:01 crc kubenswrapper[4757]: W0129 15:13:01.624646 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7add9ebb_c4ec_4eed_affb_bdd76b207c29.slice/crio-b79210df8055fa54b615368c5aab448b6d443201bc65520fa97bf62d42683cd5 WatchSource:0}: Error finding container b79210df8055fa54b615368c5aab448b6d443201bc65520fa97bf62d42683cd5: Status 404 returned error can't find the container with id b79210df8055fa54b615368c5aab448b6d443201bc65520fa97bf62d42683cd5 Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.626591 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hghqd" event={"ID":"9e9103bc-a2bb-4075-8454-c6f0af5c2c29","Type":"ContainerStarted","Data":"ee4b738a7027d7a1460dc8c7f6d84e6ae01139aee5b9e3068c6f7bf4715ea1d2"} Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.629134 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h9rvk" event={"ID":"a5122101-998b-48d5-ae6e-c4746b2ba055","Type":"ContainerStarted","Data":"3716f10d4acc81aed1a326d17bdf2a6c4a24426e1b55de7b7386cd2e0a36c977"} Jan 29 15:13:01 crc 
kubenswrapper[4757]: I0129 15:13:01.629745 4757 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-pf59m container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.629770 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m" podUID="dacc418b-f809-4317-9526-08c5781c6f68" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 29 15:13:01 crc kubenswrapper[4757]: W0129 15:13:01.638876 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9d54611_82e4_4698_b654_62a1d7144225.slice/crio-ba612900960cb53da5206d5304efd39af924b77d265268e99e6a5c7b3990902b WatchSource:0}: Error finding container ba612900960cb53da5206d5304efd39af924b77d265268e99e6a5c7b3990902b: Status 404 returned error can't find the container with id ba612900960cb53da5206d5304efd39af924b77d265268e99e6a5c7b3990902b Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.671705 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:01 crc kubenswrapper[4757]: E0129 15:13:01.672974 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:02.172953178 +0000 UTC m=+145.462203415 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.775584 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:01 crc kubenswrapper[4757]: E0129 15:13:01.775918 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:02.275903178 +0000 UTC m=+145.565153415 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.808378 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-jkgsj" podStartSLOduration=121.808335927 podStartE2EDuration="2m1.808335927s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:01.80544744 +0000 UTC m=+145.094697677" watchObservedRunningTime="2026-01-29 15:13:01.808335927 +0000 UTC m=+145.097586164" Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.877480 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:01 crc kubenswrapper[4757]: E0129 15:13:01.878318 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:02.37829741 +0000 UTC m=+145.667547647 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:01 crc kubenswrapper[4757]: I0129 15:13:01.978916 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:01 crc kubenswrapper[4757]: E0129 15:13:01.979356 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:02.479339972 +0000 UTC m=+145.768590219 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.080612 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:02 crc kubenswrapper[4757]: E0129 15:13:02.080792 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:02.580755185 +0000 UTC m=+145.870005462 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.081021 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:02 crc kubenswrapper[4757]: E0129 15:13:02.081580 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:02.581559929 +0000 UTC m=+145.870810206 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.182504 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:02 crc kubenswrapper[4757]: E0129 15:13:02.182619 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:02.682597801 +0000 UTC m=+145.971848038 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.183046 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:02 crc kubenswrapper[4757]: E0129 15:13:02.183583 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:02.68356954 +0000 UTC m=+145.972819777 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.284326 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:02 crc kubenswrapper[4757]: E0129 15:13:02.284460 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:02.784437577 +0000 UTC m=+146.073687834 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.288407 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:02 crc kubenswrapper[4757]: E0129 15:13:02.289098 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:02.789065757 +0000 UTC m=+146.078316034 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.390187 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:02 crc kubenswrapper[4757]: E0129 15:13:02.390344 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:02.890316995 +0000 UTC m=+146.179567232 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.390671 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:02 crc kubenswrapper[4757]: E0129 15:13:02.390917 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:02.890910313 +0000 UTC m=+146.180160550 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.492928 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:02 crc kubenswrapper[4757]: E0129 15:13:02.493224 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:02.993202092 +0000 UTC m=+146.282452329 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.594076 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:02 crc kubenswrapper[4757]: E0129 15:13:02.594526 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:03.094514082 +0000 UTC m=+146.383764309 (durationBeforeRetry 500ms). 
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.654897 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lhw6r" event={"ID":"9dd1f071-c13e-42a1-80bd-81d4121b0cdc","Type":"ContainerStarted","Data":"ecca900e24240b69cedcdb83decdb0a71c6627d0f540498ff093744cd78be8f6"}
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.660330 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-trnpt" event={"ID":"da973e95-27c1-4f17-87e4-79bf0bc0e0fe","Type":"ContainerStarted","Data":"79b810fbf1c4e443fb8102f403967024c61439acc14abf7abf65c87c2bbd52bd"}
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.665389 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-plpn9" event={"ID":"0379bce9-e0c6-4283-8fb4-fcf300dc30bf","Type":"ContainerStarted","Data":"a320fc3d1647529bc085442b82e8f9e5efae45d6774e1e6a6f0f5beff3b01349"}
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.672469 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9p7l" event={"ID":"589345a6-68e3-4e06-bf66-b30c3457f59c","Type":"ContainerStarted","Data":"d6bb8391a709b29e869c03ba7f65d0aff2dea6e0ec40408ad5f02a9eee8d462a"}
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.673648 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bkltp" event={"ID":"818a92e0-3e21-4f17-8950-a74066570368","Type":"ContainerStarted","Data":"29733de1119c1de9aeda89b18dac7452d4043eedf8f61e13717b32027717a349"}
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.678638 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-m9f2c" event={"ID":"1e7cba3a-da69-495d-8f3c-286a75ca8e48","Type":"ContainerStarted","Data":"1a430b648ba5c4d783551bc8425715181a88eda087b3c2e1da77b2f73bc7889c"}
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.679721 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tvv95" event={"ID":"dd74c040-9e89-4c40-8e16-5ae6c0f6e65f","Type":"ContainerStarted","Data":"ae3013a56466cdbdb0326e81c9826dd577640a204d5ab95ecf188652f47ba89c"}
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.680570 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dz9cf" event={"ID":"7add9ebb-c4ec-4eed-affb-bdd76b207c29","Type":"ContainerStarted","Data":"b79210df8055fa54b615368c5aab448b6d443201bc65520fa97bf62d42683cd5"}
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.686361 4757 generic.go:334] "Generic (PLEG): container finished" podID="9e9103bc-a2bb-4075-8454-c6f0af5c2c29" containerID="ee4b738a7027d7a1460dc8c7f6d84e6ae01139aee5b9e3068c6f7bf4715ea1d2" exitCode=0
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.686420 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hghqd" event={"ID":"9e9103bc-a2bb-4075-8454-c6f0af5c2c29","Type":"ContainerDied","Data":"ee4b738a7027d7a1460dc8c7f6d84e6ae01139aee5b9e3068c6f7bf4715ea1d2"}
[... near-identical UnmountVolume retry records, 15:13:02.695213 through 15:13:02.695563, omitted ...]
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.709800 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-m6jnj" podStartSLOduration=123.709598528 podStartE2EDuration="2m3.709598528s" podCreationTimestamp="2026-01-29 15:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:01.849781529 +0000 UTC m=+145.139031766" watchObservedRunningTime="2026-01-29 15:13:02.709598528 +0000 UTC m=+145.998848765"
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.721677 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xfk54" event={"ID":"ace4ebfd-1a19-4556-a22e-d9cc9ce6d143","Type":"ContainerStarted","Data":"ef2b7bebbc9597164ffb412e5eca6916ae4ef36481ad54a51cd18d513e6fe20e"}
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.733170 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dkjl5" event={"ID":"88566fd4-0a9f-42dd-a6d5-989dc7176aea","Type":"ContainerStarted","Data":"02dfd25d443ac49cb762c7929b0cb1f59aded97c15f7827e380f4d921dba5f19"}
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.738443 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk" event={"ID":"42aab7ad-1293-4b39-8199-0b7f944a8f31","Type":"ContainerStarted","Data":"5b66943fc40caea793124ece65bb5ece104197c4395d6dd1033077c1c2ad594d"}
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.738892 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk"
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.740167 4757 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-8tbgk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.740200 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk" podUID="42aab7ad-1293-4b39-8199-0b7f944a8f31" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused"
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.744084 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h9rvk" event={"ID":"a5122101-998b-48d5-ae6e-c4746b2ba055","Type":"ContainerStarted","Data":"b2b56d9dfa930bbe6b69411c5e6ffc49d13c4cf07964f330ecd442066d2bc048"}
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.745072 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-9zrww" event={"ID":"c8548b94-9099-42d5-914d-c2c10561bc5a","Type":"ContainerStarted","Data":"5619e374f0e7cc7e27e636fdc27c10a0104a6c32f10430a60197161aad763d4e"}
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.746137 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mg555" event={"ID":"e9d54611-82e4-4698-b654-62a1d7144225","Type":"ContainerStarted","Data":"ba612900960cb53da5206d5304efd39af924b77d265268e99e6a5c7b3990902b"}
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.752782 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-k5nbp" event={"ID":"cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8","Type":"ContainerStarted","Data":"2f0d355b5a5f2c4ff1b64888c803d0ce208206104ba75aa7e19392287827d39f"}
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.758331 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-z9qzn" event={"ID":"bab27dde-a537-445c-8d39-ad7479b66bcb","Type":"ContainerStarted","Data":"33cc81dd7777e2c32b31a4229d837f06a74ddaafd4031730a37c44967bc9281a"}
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.759379 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-gs77j" event={"ID":"3e6ceaed-34b1-4c4f-abe3-96756d34e30f","Type":"ContainerStarted","Data":"4c3e5699df9d988b871fd9545e38a74e2f5aa771b7af33002fd1e39e868ee87e"}
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.763082 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-grbn4" event={"ID":"d68b032e-f86c-4928-a676-03c9e49c6722","Type":"ContainerStarted","Data":"4f7ef3e6aea70420e15440acca913ff3a0e396beb5db2944024a0da5062546df"}
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.768242 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pw8nl" event={"ID":"1e93cae3-c9b6-493c-a8cc-c09cc83b0dca","Type":"ContainerStarted","Data":"96971c843f0567887a987e28edafa36ed53144996bbdec02f735bcd86b0d2e77"}
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.773617 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jx6g5" event={"ID":"1f404652-4bd9-4720-b625-01ae3c2d29fa","Type":"ContainerStarted","Data":"dbb59830c9c4f61e3a0e686a5487a56f5f221b52efd9d175b81ecdefac8d4c1e"}
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.774247 4757 patch_prober.go:28] interesting pod/console-operator-58897d9998-jkgsj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.774303 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-jkgsj" podUID="1fcb1a0e-6108-440c-9ed5-de6ef3d65a5f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.796635 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-mfd9g" podStartSLOduration=6.796618196 podStartE2EDuration="6.796618196s" podCreationTimestamp="2026-01-29 15:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:02.794210884 +0000 UTC m=+146.083461121" watchObservedRunningTime="2026-01-29 15:13:02.796618196 +0000 UTC m=+146.085868433"
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.796832 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk" podStartSLOduration=122.796828763 podStartE2EDuration="2m2.796828763s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:02.767478366 +0000 UTC m=+146.056728603" watchObservedRunningTime="2026-01-29 15:13:02.796828763 +0000 UTC m=+146.086079000"
[... near-identical MountVolume retry records, 15:13:02.798828 through 15:13:02.800312, omitted ...]
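Both failing probes in this stretch report "connect: connection refused": PLEG has already reported ContainerStarted, so the process is running, but nothing is listening on the serving port yet. A hedged Go sketch of an equivalent HTTPS readiness check follows; the timeout and TLS handling are assumptions, not the prober's exact configuration.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// readinessProbe performs one HTTP(S) GET and reports failure the way the
// prober records above do: a dial error while the socket is not yet open
// surfaces as "connect: connection refused".
func readinessProbe(url string) error {
	client := &http.Client{
		Timeout: time.Second,
		Transport: &http.Transport{
			// Assumption: skip verification, as probes commonly do for
			// self-signed serving certificates.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. dial tcp 10.217.0.10:8443: connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed with status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := readinessProbe("https://10.217.0.10:8443/healthz"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}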
Jan 29 15:13:02 crc kubenswrapper[4757]: I0129 15:13:02.817708 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qzc6g" podStartSLOduration=122.817691413 podStartE2EDuration="2m2.817691413s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:02.816208628 +0000 UTC m=+146.105458865" watchObservedRunningTime="2026-01-29 15:13:02.817691413 +0000 UTC m=+146.106941650"
[... near-identical UnmountVolume/MountVolume retry records for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8, repeating roughly every 100 ms from 15:13:02.899607 through 15:13:03.720390, omitted ...]
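Every one of the omitted retries fails for the same root cause: kubelet cannot construct a CSI client because no node plugin named kubevirt.io.hostpath-provisioner has registered itself yet. A minimal Go sketch of that lookup-then-fail shape, assuming a plain map-backed registry; this is illustrative only, not kubelet's actual CSI plugin manager.

package main

import (
	"fmt"
	"sync"
)

// csiRegistry maps node-registered CSI driver names to their endpoint
// sockets. Until a driver registers, any mount or unmount that needs a
// client for it fails fast, which is the loop seen in this excerpt.
type csiRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> endpoint socket path
}

func (r *csiRegistry) clientFor(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	ep, ok := r.drivers[name]
	if !ok {
		return "", fmt.Errorf(
			"driver name %s not found in the list of registered CSI drivers", name)
	}
	return ep, nil
}

func main() {
	reg := &csiRegistry{drivers: map[string]string{}}
	if _, err := reg.clientFor("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println(err) // the error string seen throughout this excerpt
	}
}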
Jan 29 15:13:03 crc kubenswrapper[4757]: I0129 15:13:03.785606 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pvt9r" event={"ID":"616e840f-aaeb-48cc-b979-f690d54a8c95","Type":"ContainerStarted","Data":"7606ad0b70d3d3560a33f3bf30fb02310403eca9bd9ee46714fa1a58db667778"}
Jan 29 15:13:03 crc kubenswrapper[4757]: I0129 15:13:03.787300 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-m7q76" event={"ID":"69d5d601-aa72-4044-ad14-81c12a34c8f0","Type":"ContainerStarted","Data":"371949d1bc3d58a5f453af4644637ed2374a60d67ba76df30d15abbeace74dc4"}
Jan 29 15:13:03 crc kubenswrapper[4757]: I0129 15:13:03.789082 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx" event={"ID":"bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a","Type":"ContainerStarted","Data":"5e60701035924b5f187ee341f82f8cb96133a131226a3245c43b83449d41b516"}
Jan 29 15:13:03 crc kubenswrapper[4757]: I0129 15:13:03.790477 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nkstc" event={"ID":"08b2186f-939e-4005-9fd9-1f1cc7b087d8","Type":"ContainerStarted","Data":"9e8fac48b1c68d634af824d09bc7bd6659d58acdefecfdd4e971f3c7ddcb3b90"}
Jan 29 15:13:03 crc kubenswrapper[4757]: I0129 15:13:03.792381 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnhtd" event={"ID":"905f21b5-42ca-4558-b66c-b957fd41c9e8","Type":"ContainerStarted","Data":"ab0667ef60a793ee4c74ec655b6141e297e5d14b5f8a2039b2e7fc9af298505e"}
Jan 29 15:13:03 crc kubenswrapper[4757]: I0129 15:13:03.792860 4757 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-8tbgk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Jan 29 15:13:03 crc kubenswrapper[4757]: I0129 15:13:03.792902 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk" podUID="42aab7ad-1293-4b39-8199-0b7f944a8f31" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused"
Jan 29 15:13:03 crc kubenswrapper[4757]: I0129 15:13:03.810879 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-h9rvk" podStartSLOduration=123.810861448 podStartE2EDuration="2m3.810861448s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:03.808243689 +0000 UTC m=+147.097493926" watchObservedRunningTime="2026-01-29 15:13:03.810861448 +0000 UTC m=+147.100111685"
[... near-identical UnmountVolume/MountVolume retry records, 15:13:03.820864 through 15:13:03.923136, omitted ...]
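The pod_startup_latency_tracker records report podStartSLOduration as the span from podCreationTimestamp to the watch-observed running time; image-pull time would be excluded, but firstStartedPulling/lastFinishedPulling are zero here because the images were already present. The arithmetic for the router-default record above, in Go:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the router-default record above.
	created := time.Date(2026, time.January, 29, 15, 11, 0, 0, time.UTC)
	watchObservedRunning := time.Date(2026, time.January, 29, 15, 13, 3, 810861448, time.UTC)
	// Prints 2m3.810861448s, i.e. podStartSLOduration=123.810861448.
	fmt.Println("podStartSLOduration:", watchObservedRunning.Sub(created))
}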
[... near-identical UnmountVolume/MountVolume retry records, 15:13:04.024970 through 15:13:04.533321, omitted ...]
Jan 29 15:13:04 crc kubenswrapper[4757]: I0129 15:13:04.549338 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-h9rvk"
Jan 29 15:13:04 crc kubenswrapper[4757]: I0129 15:13:04.552429 4757 patch_prober.go:28] interesting pod/router-default-5444994796-h9rvk container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 29 15:13:04 crc kubenswrapper[4757]: I0129 15:13:04.552591 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9rvk" podUID="a5122101-998b-48d5-ae6e-c4746b2ba055" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
[... near-identical UnmountVolume/MountVolume retry records, 15:13:04.634120 through 15:13:04.736840, omitted ...]
Jan 29 15:13:04 crc kubenswrapper[4757]: I0129 15:13:04.798128 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2g78z" event={"ID":"c20ebd50-0f39-4321-84c3-1806672c78c0","Type":"ContainerStarted","Data":"cc55cc51e93c01b9d49fc2b5473e3b11e21685f547720d35c4f026a6b1197caa"}
Jan 29 15:13:04 crc kubenswrapper[4757]: I0129 15:13:04.799719 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-skxmw" event={"ID":"a0f71154-b1ff-4e61-9c93-8bcb95678bce","Type":"ContainerStarted","Data":"7d99940842de2d9a9f4d7a3901a12cb270cad2378f3a2ced7d21e743ecf4bec7"}
[... near-identical UnmountVolume/MountVolume retry records for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8, 15:13:04.837718 through 15:13:05.142665, omitted ...]
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.143048 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:05 crc kubenswrapper[4757]: E0129 15:13:05.143403 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:05.643395125 +0000 UTC m=+148.932645362 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.244480 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:05 crc kubenswrapper[4757]: E0129 15:13:05.244924 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:05.74490628 +0000 UTC m=+149.034156517 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.346914 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:05 crc kubenswrapper[4757]: E0129 15:13:05.347388 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:05.847367475 +0000 UTC m=+149.136617712 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.448362 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:05 crc kubenswrapper[4757]: E0129 15:13:05.448509 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:05.948480759 +0000 UTC m=+149.237730996 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.448628 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:05 crc kubenswrapper[4757]: E0129 15:13:05.448909 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:05.948900392 +0000 UTC m=+149.238150619 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.549415 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:05 crc kubenswrapper[4757]: E0129 15:13:05.549546 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:06.049527981 +0000 UTC m=+149.338778218 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.549654 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:05 crc kubenswrapper[4757]: E0129 15:13:05.549989 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:06.049976484 +0000 UTC m=+149.339226731 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.651208 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:05 crc kubenswrapper[4757]: E0129 15:13:05.651685 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:06.151665426 +0000 UTC m=+149.440915663 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.683083 4757 patch_prober.go:28] interesting pod/router-default-5444994796-h9rvk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:13:05 crc kubenswrapper[4757]: [-]has-synced failed: reason withheld Jan 29 15:13:05 crc kubenswrapper[4757]: [+]process-running ok Jan 29 15:13:05 crc kubenswrapper[4757]: healthz check failed Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.683169 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9rvk" podUID="a5122101-998b-48d5-ae6e-c4746b2ba055" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.753049 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:05 crc kubenswrapper[4757]: E0129 15:13:05.753417 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:06.253403018 +0000 UTC m=+149.542653255 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.811182 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9p7l" event={"ID":"589345a6-68e3-4e06-bf66-b30c3457f59c","Type":"ContainerStarted","Data":"210b0f77aa50bb460d1684e439a8bcf709cd4e845881d23799830070f52e85fa"} Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.812558 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2x58v" event={"ID":"8351b0cf-f243-4fe3-ba94-30f3ee17320e","Type":"ContainerStarted","Data":"43ffc3e09958738521b3e1070857db4a571a419a62dc3b87455ae88edaa63c2f"} Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.813818 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-plpn9" event={"ID":"0379bce9-e0c6-4283-8fb4-fcf300dc30bf","Type":"ContainerStarted","Data":"8cbc99427dd8cfbd8d07ebb0ed125ef78f2536841e30ff757a12cb8f119e171d"} Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.815124 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpcpn" event={"ID":"8119cbd7-40a4-4875-b49c-1e982ec9acd8","Type":"ContainerStarted","Data":"96d18dab0e7967c6f0d707bf69df8ffd9e4e2fc8ea8588c70d85963a8a1b989b"} Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.816329 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dz9cf" event={"ID":"7add9ebb-c4ec-4eed-affb-bdd76b207c29","Type":"ContainerStarted","Data":"0c25f48c8519034b3e1bbd060f6eead5019d9c2ec0b55f2cbb5bae61c9016bf2"} Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.817449 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-9zrww" event={"ID":"c8548b94-9099-42d5-914d-c2c10561bc5a","Type":"ContainerStarted","Data":"70445d3a6be4b1bc25e607c9d71e752774df96544d331f1b0f373c0d9ffd4967"} Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.818383 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dkjl5" event={"ID":"88566fd4-0a9f-42dd-a6d5-989dc7176aea","Type":"ContainerStarted","Data":"820af725069ad9c3e4b9981faed3cb3f3f8bcc261ac123d699ba26d6c3d3593d"} Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.819461 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bkltp" event={"ID":"818a92e0-3e21-4f17-8950-a74066570368","Type":"ContainerStarted","Data":"f149bbc0b9e1b188949cc29ba75109e12728113bed5159ed98d243938d5d61d1"} Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.821663 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-gs77j" 
event={"ID":"3e6ceaed-34b1-4c4f-abe3-96756d34e30f","Type":"ContainerStarted","Data":"d0bc679c69ac0e94034d7862caaaad4a3c9d97f01c26e41dd5cb4aff7667cfa3"} Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.824050 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hghqd" event={"ID":"9e9103bc-a2bb-4075-8454-c6f0af5c2c29","Type":"ContainerStarted","Data":"5bbf4832c3270f92aea815b34c43a13dc9e5f8e3c43b61e44577381b7d7cc997"} Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.829979 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mg555" event={"ID":"e9d54611-82e4-4698-b654-62a1d7144225","Type":"ContainerStarted","Data":"fa050baf64540fd87207c12d8c741141a192ddc36a36d1622a0c24bb548c888e"} Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.831503 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h7pw2" event={"ID":"bcb581d1-4a29-4bf3-9df9-1669cf88e9f3","Type":"ContainerStarted","Data":"65de4d5df7bf4e85284a6fa42848b2fd035a4e23cf48329c3b46721fca247e57"} Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.833458 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ssg7r" event={"ID":"f10cf2ea-d11c-422e-9f8e-b93d422df097","Type":"ContainerStarted","Data":"ef7e60cae2017ba3cf0de1e2e8548dd8874a6f817a1b6a6de4bcf2e161a03664"} Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.836732 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvpbt" event={"ID":"ea61811e-2455-4157-a3f3-1376f4a11e8c","Type":"ContainerStarted","Data":"0738e0d0cc0c8fca3f4cc4eba92e8cb01a198b251a755ceecdb04327e89482e0"} Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.838312 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n44qs" event={"ID":"6be95c99-c279-4066-a0c6-b1499d8f7e07","Type":"ContainerStarted","Data":"4306a527675e2e458ad593f61a9e455ccda37a8470f161114eb2266d016331cf"} Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.839730 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-m9f2c" event={"ID":"1e7cba3a-da69-495d-8f3c-286a75ca8e48","Type":"ContainerStarted","Data":"0b2ef8ee1add2f6f39f1a8bb76424be78ba4df5a686025f8f644f1d3c038ab38"} Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.842350 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-k5nbp" event={"ID":"cd5cf88d-a64c-4ff5-bffb-bc35d08e22e8","Type":"ContainerStarted","Data":"7042e5aa25db8fd06e09e44b827d40637736f3d01d7c4dfa96f77f7f606e12ed"} Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.843578 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-grbn4" event={"ID":"d68b032e-f86c-4928-a676-03c9e49c6722","Type":"ContainerStarted","Data":"648b1da2c0ca3898bfaae4861790da0c26c99c96b6fe560352e8cbec0fed5ada"} Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.845185 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xfk54" event={"ID":"ace4ebfd-1a19-4556-a22e-d9cc9ce6d143","Type":"ContainerStarted","Data":"230ca3e07a5a8b8a71d3696f2d9a58aec6d5d6fa37412d201297f5843e86156c"} Jan 29 15:13:05 crc 
kubenswrapper[4757]: I0129 15:13:05.847859 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnhtd" Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.849233 4757 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-wnhtd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" start-of-body= Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.849426 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnhtd" podUID="905f21b5-42ca-4558-b66c-b957fd41c9e8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.853689 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:05 crc kubenswrapper[4757]: E0129 15:13:05.854022 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:06.353992907 +0000 UTC m=+149.643243144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.854302 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:05 crc kubenswrapper[4757]: E0129 15:13:05.854752 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:06.354741139 +0000 UTC m=+149.643991376 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.887009 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pw8nl" podStartSLOduration=125.886989113 podStartE2EDuration="2m5.886989113s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:05.866502684 +0000 UTC m=+149.155752921" watchObservedRunningTime="2026-01-29 15:13:05.886989113 +0000 UTC m=+149.176239350" Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.888467 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnhtd" podStartSLOduration=125.888449797 podStartE2EDuration="2m5.888449797s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:05.887623272 +0000 UTC m=+149.176873509" watchObservedRunningTime="2026-01-29 15:13:05.888449797 +0000 UTC m=+149.177700034" Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.902745 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-skxmw" podStartSLOduration=125.902728799 podStartE2EDuration="2m5.902728799s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:05.901919094 +0000 UTC m=+149.191169321" watchObservedRunningTime="2026-01-29 15:13:05.902728799 +0000 UTC m=+149.191979036" Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.921838 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx" podStartSLOduration=125.921817255 podStartE2EDuration="2m5.921817255s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:05.919952219 +0000 UTC m=+149.209202476" watchObservedRunningTime="2026-01-29 15:13:05.921817255 +0000 UTC m=+149.211067492" Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.935918 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-z9qzn" podStartSLOduration=125.93589959 podStartE2EDuration="2m5.93589959s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:05.935723785 +0000 UTC m=+149.224974022" watchObservedRunningTime="2026-01-29 15:13:05.93589959 +0000 UTC m=+149.225149827" Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.958173 4757 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:05 crc kubenswrapper[4757]: E0129 15:13:05.960366 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:06.460345519 +0000 UTC m=+149.749595756 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:05 crc kubenswrapper[4757]: I0129 15:13:05.970942 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jx6g5" podStartSLOduration=125.970918698 podStartE2EDuration="2m5.970918698s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:05.954692138 +0000 UTC m=+149.243942375" watchObservedRunningTime="2026-01-29 15:13:05.970918698 +0000 UTC m=+149.260168925" Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.060539 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:06 crc kubenswrapper[4757]: E0129 15:13:06.060918 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:06.560902286 +0000 UTC m=+149.850152523 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.161334 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:06 crc kubenswrapper[4757]: E0129 15:13:06.161674 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:06.661659249 +0000 UTC m=+149.950909486 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.262709 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:06 crc kubenswrapper[4757]: E0129 15:13:06.263325 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:06.763308519 +0000 UTC m=+150.052558756 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.363750 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:06 crc kubenswrapper[4757]: E0129 15:13:06.364071 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:06.864052202 +0000 UTC m=+150.153302439 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.364108 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:06 crc kubenswrapper[4757]: E0129 15:13:06.364416 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:06.864408263 +0000 UTC m=+150.153658500 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.465059 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:06 crc kubenswrapper[4757]: E0129 15:13:06.465246 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:06.965218307 +0000 UTC m=+150.254468544 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.465304 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.465354 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.465379 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.465418 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.465448 4757 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:06 crc kubenswrapper[4757]: E0129 15:13:06.465682 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:06.965670941 +0000 UTC m=+150.254921178 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.470763 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.470940 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.485784 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.552240 4757 patch_prober.go:28] interesting pod/router-default-5444994796-h9rvk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:13:06 crc kubenswrapper[4757]: [-]has-synced failed: reason withheld Jan 29 15:13:06 crc kubenswrapper[4757]: [+]process-running ok Jan 29 15:13:06 crc kubenswrapper[4757]: healthz check failed Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.552571 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9rvk" podUID="a5122101-998b-48d5-ae6e-c4746b2ba055" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.566214 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:06 crc kubenswrapper[4757]: E0129 15:13:06.566420 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:07.066401613 +0000 UTC m=+150.355651850 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.566506 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:06 crc kubenswrapper[4757]: E0129 15:13:06.567027 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:07.067020252 +0000 UTC m=+150.356270489 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.624422 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.667431 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:06 crc kubenswrapper[4757]: E0129 15:13:06.668688 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:07.168660822 +0000 UTC m=+150.457911059 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.715067 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.724701 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.732981 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.769048 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:06 crc kubenswrapper[4757]: E0129 15:13:06.769580 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:07.269558148 +0000 UTC m=+150.558808445 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.852648 4757 generic.go:334] "Generic (PLEG): container finished" podID="0b0330c1-19bb-492e-815a-2827e5749d68" containerID="8640b96ab6fe0b53edad492fb161040a6cd6cd698c6d94ba1ad9a968884403a4" exitCode=0
Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.853671 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-zrp48" event={"ID":"0b0330c1-19bb-492e-815a-2827e5749d68","Type":"ContainerDied","Data":"8640b96ab6fe0b53edad492fb161040a6cd6cd698c6d94ba1ad9a968884403a4"}
Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.855881 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n44qs"
Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.856034 4757 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n44qs container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body=
Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.856132 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n44qs" podUID="6be95c99-c279-4066-a0c6-b1499d8f7e07" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused"
Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.856496 4757 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-wnhtd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" start-of-body=
Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.856568 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnhtd" podUID="905f21b5-42ca-4558-b66c-b957fd41c9e8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused"
Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.871044 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:06 crc kubenswrapper[4757]: E0129 15:13:06.871212 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:07.371177917 +0000 UTC m=+150.660428164 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.871584 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:06 crc kubenswrapper[4757]: E0129 15:13:06.871913 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:07.371903179 +0000 UTC m=+150.661153406 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.894988 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n44qs" podStartSLOduration=126.894971226 podStartE2EDuration="2m6.894971226s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:06.89411823 +0000 UTC m=+150.183368467" watchObservedRunningTime="2026-01-29 15:13:06.894971226 +0000 UTC m=+150.184221463"
Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.922384 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-m7q76" podStartSLOduration=126.922339203 podStartE2EDuration="2m6.922339203s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:06.918219448 +0000 UTC m=+150.207469705" watchObservedRunningTime="2026-01-29 15:13:06.922339203 +0000 UTC m=+150.211589440"
Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.972924 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:06 crc kubenswrapper[4757]: E0129 15:13:06.973097 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:07.473068285 +0000 UTC m=+150.762318522 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:06 crc kubenswrapper[4757]: I0129 15:13:06.975390 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:06 crc kubenswrapper[4757]: E0129 15:13:06.976233 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:07.47621905 +0000 UTC m=+150.765469287 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.040360 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2g78z" podStartSLOduration=130.040334696 podStartE2EDuration="2m10.040334696s" podCreationTimestamp="2026-01-29 15:10:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:06.950959467 +0000 UTC m=+150.240209704" watchObservedRunningTime="2026-01-29 15:13:07.040334696 +0000 UTC m=+150.329584943"
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.076171 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:07 crc kubenswrapper[4757]: E0129 15:13:07.076553 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:07.57653588 +0000 UTC m=+150.865786117 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.177907 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:07 crc kubenswrapper[4757]: E0129 15:13:07.178335 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:07.678319514 +0000 UTC m=+150.967569761 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.278375 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:07 crc kubenswrapper[4757]: E0129 15:13:07.278725 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:07.778710266 +0000 UTC m=+151.067960503 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.380957 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:07 crc kubenswrapper[4757]: E0129 15:13:07.381239 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:07.881225142 +0000 UTC m=+151.170475379 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.482166 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:07 crc kubenswrapper[4757]: E0129 15:13:07.482802 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:07.98278517 +0000 UTC m=+151.272035407 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.552924 4757 patch_prober.go:28] interesting pod/router-default-5444994796-h9rvk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:13:07 crc kubenswrapper[4757]: [-]has-synced failed: reason withheld
Jan 29 15:13:07 crc kubenswrapper[4757]: [+]process-running ok
Jan 29 15:13:07 crc kubenswrapper[4757]: healthz check failed
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.552978 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9rvk" podUID="a5122101-998b-48d5-ae6e-c4746b2ba055" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.583887 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:07 crc kubenswrapper[4757]: E0129 15:13:07.584377 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:08.084360938 +0000 UTC m=+151.373611175 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.684884 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:07 crc kubenswrapper[4757]: E0129 15:13:07.685073 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:08.185043378 +0000 UTC m=+151.474293635 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.685164 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:07 crc kubenswrapper[4757]: E0129 15:13:07.685499 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:08.185484272 +0000 UTC m=+151.474734509 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.786777 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:07 crc kubenswrapper[4757]: E0129 15:13:07.786967 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:08.286940626 +0000 UTC m=+151.576190853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.787029 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:07 crc kubenswrapper[4757]: E0129 15:13:07.787358 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:08.287344828 +0000 UTC m=+151.576595065 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.858437 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0575f6cdaad4d67a15c95c02305bd1327accfe96c61c3f3533ab1aa2c0d95e0d"}
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.859696 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"7d400cf0b09155273bea4312d54df073eb34aad74555656aff187fd05346fb77"}
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.860712 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"99bfbf868c24f1deed2b9915ec378b1bfc12922c0ce5865620e8d597071f2d9a"}
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.862832 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lhw6r" event={"ID":"9dd1f071-c13e-42a1-80bd-81d4121b0cdc","Type":"ContainerStarted","Data":"28cae56e75ee6fad5d81047c0988cf650fdc33793453df5a3cdc36d8dbbcc887"}
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.864424 4757 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n44qs container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body=
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.864503 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n44qs" podUID="6be95c99-c279-4066-a0c6-b1499d8f7e07" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused"
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.869751 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.870486 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.872562 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.872800 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.879159 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.887700 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:07 crc kubenswrapper[4757]: E0129 15:13:07.888042 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:08.388028589 +0000 UTC m=+151.677278826 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.890375 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bkltp" podStartSLOduration=127.890245306 podStartE2EDuration="2m7.890245306s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:07.883858353 +0000 UTC m=+151.173108630" watchObservedRunningTime="2026-01-29 15:13:07.890245306 +0000 UTC m=+151.179495583"
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.911492 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-9zrww" podStartSLOduration=128.911475617 podStartE2EDuration="2m8.911475617s" podCreationTimestamp="2026-01-29 15:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:07.911353564 +0000 UTC m=+151.200603821" watchObservedRunningTime="2026-01-29 15:13:07.911475617 +0000 UTC m=+151.200725854"
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.946390 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-xfk54" podStartSLOduration=11.946371421 podStartE2EDuration="11.946371421s" podCreationTimestamp="2026-01-29 15:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:07.944698051 +0000 UTC m=+151.233948288" watchObservedRunningTime="2026-01-29 15:13:07.946371421 +0000 UTC m=+151.235621658"
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.971634 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpcpn" podStartSLOduration=127.971612304 podStartE2EDuration="2m7.971612304s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:07.971539541 +0000 UTC m=+151.260789788" watchObservedRunningTime="2026-01-29 15:13:07.971612304 +0000 UTC m=+151.260862561"
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.990805 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.991148 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b7fb8938-c31a-4dba-9d00-e6b165b5ad13-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b7fb8938-c31a-4dba-9d00-e6b165b5ad13\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 15:13:07 crc kubenswrapper[4757]: I0129 15:13:07.991254 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7fb8938-c31a-4dba-9d00-e6b165b5ad13-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b7fb8938-c31a-4dba-9d00-e6b165b5ad13\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 15:13:07 crc kubenswrapper[4757]: E0129 15:13:07.993001 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:08.492988919 +0000 UTC m=+151.782239156 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.009398 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x9p7l" podStartSLOduration=128.009379484 podStartE2EDuration="2m8.009379484s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:08.003723234 +0000 UTC m=+151.292973481" watchObservedRunningTime="2026-01-29 15:13:08.009379484 +0000 UTC m=+151.298629721"
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.092670 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.092907 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b7fb8938-c31a-4dba-9d00-e6b165b5ad13-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b7fb8938-c31a-4dba-9d00-e6b165b5ad13\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.092935 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7fb8938-c31a-4dba-9d00-e6b165b5ad13-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b7fb8938-c31a-4dba-9d00-e6b165b5ad13\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 15:13:08 crc kubenswrapper[4757]: E0129 15:13:08.094427 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:08.594402612 +0000 UTC m=+151.883652849 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.094993 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b7fb8938-c31a-4dba-9d00-e6b165b5ad13-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b7fb8938-c31a-4dba-9d00-e6b165b5ad13\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.179919 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7fb8938-c31a-4dba-9d00-e6b165b5ad13-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b7fb8938-c31a-4dba-9d00-e6b165b5ad13\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.180829 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-plpn9" podStartSLOduration=128.180811372 podStartE2EDuration="2m8.180811372s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:08.094577168 +0000 UTC m=+151.383827405" watchObservedRunningTime="2026-01-29 15:13:08.180811372 +0000 UTC m=+151.470061609"
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.198941 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:08 crc kubenswrapper[4757]: E0129 15:13:08.199576 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:08.699563908 +0000 UTC m=+151.988814135 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
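[Editor's note] Each pod_startup_latency_tracker entry reports how long a pod took to reach Running: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is the same interval with image-pull time excluded. In these entries the pull timestamps are the zero value (0001-01-01), i.e. no pull was observed, so the two durations coincide. A quick check of that arithmetic for the catalog-operator entry earlier in this log, with its timestamps pasted in (a sketch, not the tracker's code):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	// Timestamps copied from the catalog-operator startup-duration entry.
	created, _ := time.Parse(layout, "2026-01-29 15:11:00 +0000 UTC")
	running, _ := time.Parse(layout, "2026-01-29 15:13:06.894971226 +0000 UTC")

	// With no image pull observed, SLO duration == end-to-end duration.
	fmt.Println(running.Sub(created)) // 2m6.894971226s, matching podStartE2EDuration
}
```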
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.206998 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.300537 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:08 crc kubenswrapper[4757]: E0129 15:13:08.301112 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:08.801096885 +0000 UTC m=+152.090347122 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.400482 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx"
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.400530 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx"
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.401802 4757 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-zzvjx container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.401847 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx" podUID="bfbebfa7-0248-4fbc-b72a-60c4ba43bb0a" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.402279 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:08 crc kubenswrapper[4757]: E0129 15:13:08.402613 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:08.902600381 +0000 UTC m=+152.191850618 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.418539 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m"
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.560999 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:08 crc kubenswrapper[4757]: E0129 15:13:08.561540 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:09.06151153 +0000 UTC m=+152.350761767 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.562062 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:08 crc kubenswrapper[4757]: E0129 15:13:08.562502 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:09.0624874 +0000 UTC m=+152.351737637 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.576121 4757 patch_prober.go:28] interesting pod/router-default-5444994796-h9rvk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:13:08 crc kubenswrapper[4757]: [-]has-synced failed: reason withheld
Jan 29 15:13:08 crc kubenswrapper[4757]: [+]process-running ok
Jan 29 15:13:08 crc kubenswrapper[4757]: healthz check failed
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.576357 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9rvk" podUID="a5122101-998b-48d5-ae6e-c4746b2ba055" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.578375 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 29 15:13:08 crc kubenswrapper[4757]: W0129 15:13:08.610813 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb7fb8938_c31a_4dba_9d00_e6b165b5ad13.slice/crio-c15707bbed434a047efab7344bdea115c36b839106d53f0ced94f5a6be8040d4 WatchSource:0}: Error finding container c15707bbed434a047efab7344bdea115c36b839106d53f0ced94f5a6be8040d4: Status 404 returned error can't find the container with id c15707bbed434a047efab7344bdea115c36b839106d53f0ced94f5a6be8040d4
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.663021 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:08 crc kubenswrapper[4757]: E0129 15:13:08.663675 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:09.163655425 +0000 UTC m=+152.452905662 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.739564 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-jkgsj"
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.764312 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:08 crc kubenswrapper[4757]: E0129 15:13:08.764682 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:09.264662886 +0000 UTC m=+152.553913133 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.865070 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:08 crc kubenswrapper[4757]: E0129 15:13:08.865258 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:09.365225563 +0000 UTC m=+152.654475800 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.865393 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:08 crc kubenswrapper[4757]: E0129 15:13:08.865683 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:09.365671647 +0000 UTC m=+152.654921884 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.869905 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tvv95" event={"ID":"dd74c040-9e89-4c40-8e16-5ae6c0f6e65f","Type":"ContainerStarted","Data":"e0b3df785f1b2b3c6ab2f6d4f5fd127a27e26dd2345350645effb62551ce596a"}
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.871031 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-trnpt" event={"ID":"da973e95-27c1-4f17-87e4-79bf0bc0e0fe","Type":"ContainerStarted","Data":"934feb182fdb66bc63ce46d2f85cba8e919f2b91626d501503c3e50b001436a9"}
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.872067 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pvt9r" event={"ID":"616e840f-aaeb-48cc-b979-f690d54a8c95","Type":"ContainerStarted","Data":"7456af03d58261bba2d453addfa9c2e263218243cf01b2435d9d24b36faa0e80"}
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.872650 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b7fb8938-c31a-4dba-9d00-e6b165b5ad13","Type":"ContainerStarted","Data":"c15707bbed434a047efab7344bdea115c36b839106d53f0ced94f5a6be8040d4"}
Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.873840 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-m9f2c" event={"ID":"1e7cba3a-da69-495d-8f3c-286a75ca8e48","Type":"ContainerStarted","Data":"638c3aa3018eb8bdee3c76c5a299a4e8eb4f736c711d5d7903bfe1edd9aa1efc"}
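[Editor's note] The "SyncLoop (PLEG)" entries are the Pod Lifecycle Event Generator surfacing container-state changes (ContainerStarted/ContainerDied, with a container or sandbox ID as Data) into the kubelet's sync loop, which then triggers a sync of the affected pod. A minimal sketch of dispatching events of that shape, assuming a trivial event struct (illustrative only, not the kubelet's actual PLEG types):

```go
package main

import "fmt"

// podLifecycleEvent mimics the shape serialized in the log entries above:
// event={"ID":..., "Type":"ContainerStarted", "Data":<container ID>}.
type podLifecycleEvent struct {
	ID   string // pod UID
	Type string // "ContainerStarted" or "ContainerDied"
	Data string // container (or sandbox) ID
}

func handle(ev podLifecycleEvent) {
	switch ev.Type {
	case "ContainerStarted", "ContainerDied":
		// Either way the pod's desired and actual state may have diverged,
		// so the sync loop re-syncs the pod.
		fmt.Printf("pod %s: %s for container %s; trigger pod sync\n", ev.ID, ev.Type, ev.Data)
	default:
		fmt.Printf("pod %s: unhandled event %s\n", ev.ID, ev.Type)
	}
}

func main() {
	handle(podLifecycleEvent{
		ID:   "b7fb8938-c31a-4dba-9d00-e6b165b5ad13",
		Type: "ContainerStarted",
		Data: "c15707bbed434a047efab7344bdea115c36b839106d53f0ced94f5a6be8040d4",
	})
}
```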
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nkstc" event={"ID":"08b2186f-939e-4005-9fd9-1f1cc7b087d8","Type":"ContainerStarted","Data":"df592c9389cc57d82c7575a25a85817cd27171e6b5136948ad491095ccddf673"} Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.879294 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.879320 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dz9cf" Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.879330 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-gs77j" Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.879338 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hghqd" Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.879347 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-grbn4" Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.886303 4757 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-dz9cf container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.886329 4757 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-mg555 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.29:6443/healthz\": dial tcp 10.217.0.29:6443: connect: connection refused" start-of-body= Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.886307 4757 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-hghqd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.886363 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dz9cf" podUID="7add9ebb-c4ec-4eed-affb-bdd76b207c29" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.886381 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-mg555" podUID="e9d54611-82e4-4698-b654-62a1d7144225" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.29:6443/healthz\": dial tcp 10.217.0.29:6443: connect: connection refused" Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.886416 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hghqd" podUID="9e9103bc-a2bb-4075-8454-c6f0af5c2c29" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 
15:13:08.886646 4757 patch_prober.go:28] interesting pod/downloads-7954f5f757-gs77j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.886713 4757 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-8tbgk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.886741 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk" podUID="42aab7ad-1293-4b39-8199-0b7f944a8f31" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.886715 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gs77j" podUID="3e6ceaed-34b1-4c4f-abe3-96756d34e30f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.886796 4757 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-grbn4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.886814 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-grbn4" podUID="d68b032e-f86c-4928-a676-03c9e49c6722" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.907552 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-grbn4" podStartSLOduration=128.907534921 podStartE2EDuration="2m8.907534921s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:08.907184571 +0000 UTC m=+152.196434808" watchObservedRunningTime="2026-01-29 15:13:08.907534921 +0000 UTC m=+152.196785158" Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.947601 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2x58v" podStartSLOduration=128.947583351 podStartE2EDuration="2m8.947583351s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:08.94689564 +0000 UTC m=+152.236145877" watchObservedRunningTime="2026-01-29 15:13:08.947583351 +0000 UTC m=+152.236833578" Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.967028 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:08 crc kubenswrapper[4757]: E0129 15:13:08.967155 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:09.467124471 +0000 UTC m=+152.756374708 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:08 crc kubenswrapper[4757]: I0129 15:13:08.967201 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:08 crc kubenswrapper[4757]: E0129 15:13:08.967680 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:09.467660827 +0000 UTC m=+152.756911134 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.001854 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-gs77j" podStartSLOduration=129.001834349 podStartE2EDuration="2m9.001834349s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:09.000497669 +0000 UTC m=+152.289747906" watchObservedRunningTime="2026-01-29 15:13:09.001834349 +0000 UTC m=+152.291084586" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.002909 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-mg555" podStartSLOduration=130.002903612 podStartE2EDuration="2m10.002903612s" podCreationTimestamp="2026-01-29 15:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:08.981855166 +0000 UTC m=+152.271105413" watchObservedRunningTime="2026-01-29 15:13:09.002903612 +0000 UTC m=+152.292153849" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.064052 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h7pw2" podStartSLOduration=130.064035948 podStartE2EDuration="2m10.064035948s" podCreationTimestamp="2026-01-29 15:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:09.063059208 +0000 UTC m=+152.352309445" watchObservedRunningTime="2026-01-29 15:13:09.064035948 +0000 UTC m=+152.353286185" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.065316 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-k5nbp" podStartSLOduration=129.065308046 podStartE2EDuration="2m9.065308046s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:09.016701428 +0000 UTC m=+152.305951665" watchObservedRunningTime="2026-01-29 15:13:09.065308046 +0000 UTC m=+152.354558283" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.068577 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:09 crc kubenswrapper[4757]: E0129 15:13:09.068666 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 15:13:09.568650767 +0000 UTC m=+152.857900994 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.068782 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:09 crc kubenswrapper[4757]: E0129 15:13:09.069110 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:09.569095551 +0000 UTC m=+152.858345788 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.081806 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hghqd" podStartSLOduration=129.081776714 podStartE2EDuration="2m9.081776714s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:09.081242208 +0000 UTC m=+152.370492445" watchObservedRunningTime="2026-01-29 15:13:09.081776714 +0000 UTC m=+152.371026951" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.088657 4757 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-wnhtd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" start-of-body= Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.088690 4757 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-wnhtd container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" start-of-body= Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.088708 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnhtd" podUID="905f21b5-42ca-4558-b66c-b957fd41c9e8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" Jan 29 
15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.088782 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnhtd" podUID="905f21b5-42ca-4558-b66c-b957fd41c9e8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": dial tcp 10.217.0.28:5443: connect: connection refused" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.111494 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dz9cf" podStartSLOduration=129.11146281 podStartE2EDuration="2m9.11146281s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:09.108377137 +0000 UTC m=+152.397627384" watchObservedRunningTime="2026-01-29 15:13:09.11146281 +0000 UTC m=+152.400713047" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.140880 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvpbt" podStartSLOduration=130.140860908 podStartE2EDuration="2m10.140860908s" podCreationTimestamp="2026-01-29 15:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:09.137347162 +0000 UTC m=+152.426597399" watchObservedRunningTime="2026-01-29 15:13:09.140860908 +0000 UTC m=+152.430111145" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.170234 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:09 crc kubenswrapper[4757]: E0129 15:13:09.170386 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:09.670367229 +0000 UTC m=+152.959617466 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.170705 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:09 crc kubenswrapper[4757]: E0129 15:13:09.171054 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
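
The pod_startup_latency_tracker entries are internally consistent: podStartSLOduration is simply the span from podCreationTimestamp to the observed running time. Taking the olm-operator-6b444d44fb-dz9cf entry above, 15:13:09.11146281 minus 15:11:00 is 2m9.11146281s, i.e. the logged 129.11146281 seconds. A quick check of that arithmetic, with timestamps copied verbatim from the entry:

    // Verifies the startup-latency arithmetic from the olm-operator entry.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	created, _ := time.Parse(time.RFC3339, "2026-01-29T15:11:00Z")
    	observed, _ := time.Parse(time.RFC3339Nano, "2026-01-29T15:13:09.11146281Z")
    	d := observed.Sub(created)
    	fmt.Println(d)           // 2m9.11146281s, the logged podStartE2EDuration
    	fmt.Println(d.Seconds()) // 129.11146281, the logged podStartSLOduration
    }
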
No retries permitted until 2026-01-29 15:13:09.67104249 +0000 UTC m=+152.960292797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.271600 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:09 crc kubenswrapper[4757]: E0129 15:13:09.271734 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:09.77171493 +0000 UTC m=+153.060965167 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.271804 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:09 crc kubenswrapper[4757]: E0129 15:13:09.272063 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:09.772055841 +0000 UTC m=+153.061306078 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.277841 4757 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n44qs container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.277866 4757 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n44qs container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.277922 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n44qs" podUID="6be95c99-c279-4066-a0c6-b1499d8f7e07" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.277875 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n44qs" podUID="6be95c99-c279-4066-a0c6-b1499d8f7e07" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.357205 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.372863 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:09 crc kubenswrapper[4757]: E0129 15:13:09.373339 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:09.873324619 +0000 UTC m=+153.162574856 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.452338 4757 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-mg555 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.29:6443/healthz\": dial tcp 10.217.0.29:6443: connect: connection refused" start-of-body= Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.452388 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-mg555" podUID="e9d54611-82e4-4698-b654-62a1d7144225" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.29:6443/healthz\": dial tcp 10.217.0.29:6443: connect: connection refused" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.474622 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:09 crc kubenswrapper[4757]: E0129 15:13:09.475003 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:09.97498494 +0000 UTC m=+153.264235177 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.533406 4757 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-dz9cf container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.533471 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dz9cf" podUID="7add9ebb-c4ec-4eed-affb-bdd76b207c29" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.533423 4757 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-dz9cf container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.533679 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dz9cf" podUID="7add9ebb-c4ec-4eed-affb-bdd76b207c29" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.549339 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-h9rvk" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.553364 4757 patch_prober.go:28] interesting pod/router-default-5444994796-h9rvk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:13:09 crc kubenswrapper[4757]: [-]has-synced failed: reason withheld Jan 29 15:13:09 crc kubenswrapper[4757]: [+]process-running ok Jan 29 15:13:09 crc kubenswrapper[4757]: healthz check failed Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.553420 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9rvk" podUID="a5122101-998b-48d5-ae6e-c4746b2ba055" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.558600 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.558668 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.560542 4757 patch_prober.go:28] interesting pod/console-f9d7485db-skxmw container/console namespace/openshift-console: Startup probe status=failure 
output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.560594 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-skxmw" podUID="a0f71154-b1ff-4e61-9c93-8bcb95678bce" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.575918 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:09 crc kubenswrapper[4757]: E0129 15:13:09.576049 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:10.076021241 +0000 UTC m=+153.365271478 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.576240 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:09 crc kubenswrapper[4757]: E0129 15:13:09.576915 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:10.076903308 +0000 UTC m=+153.366153635 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.581627 4757 patch_prober.go:28] interesting pod/downloads-7954f5f757-gs77j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.581669 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gs77j" podUID="3e6ceaed-34b1-4c4f-abe3-96756d34e30f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.581633 4757 patch_prober.go:28] interesting pod/downloads-7954f5f757-gs77j container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.581745 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gs77j" podUID="3e6ceaed-34b1-4c4f-abe3-96756d34e30f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.677545 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:09 crc kubenswrapper[4757]: E0129 15:13:09.677789 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:10.177751184 +0000 UTC m=+153.467001441 (durationBeforeRetry 500ms). 
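
The probe failures interleaved through this window all have the same shape: kubelet's HTTP GET against the pod IP (packageserver on 10.217.0.28:5443, catalog-operator on 10.217.0.39:8443, oauth-openshift on 10.217.0.29:6443, downloads on 10.217.0.25:8080) is refused because the container has started but its server has not bound the port yet. A stripped-down probe of this kind looks roughly like the sketch below (an illustration, not kubelet's prober, which additionally applies configured timeouts, thresholds, and treats any 2xx/3xx status as success):

    // A stripped-down HTTP probe in the spirit of the failures above: an HTTP
    // GET against the pod IP, where a refused connection or an error status
    // counts as failure.
    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // probe returns nil when the endpoint answers with a non-error status.
    func probe(url string) error {
    	client := &http.Client{Timeout: time.Second}
    	resp, err := client.Get(url)
    	if err != nil {
    		// Before the server binds its port this is typically
    		// "connect: connection refused", as in the log.
    		return fmt.Errorf("probe failed: %w", err)
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode >= 400 {
    		return fmt.Errorf("probe failed with statuscode: %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	// Against an endpoint that is not listening yet, this prints a
    	// connection-refused (or timeout) error.
    	fmt.Println(probe("http://10.217.0.23:8080/healthz"))
    }
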
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.677881 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:09 crc kubenswrapper[4757]: E0129 15:13:09.678219 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:10.178207568 +0000 UTC m=+153.467457825 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.708820 4757 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-grbn4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.708876 4757 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-grbn4 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.708885 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-grbn4" podUID="d68b032e-f86c-4928-a676-03c9e49c6722" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.708955 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-grbn4" podUID="d68b032e-f86c-4928-a676-03c9e49c6722" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.779011 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:09 crc kubenswrapper[4757]: E0129 15:13:09.779202 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:10.279175767 +0000 UTC m=+153.568426004 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.779364 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:09 crc kubenswrapper[4757]: E0129 15:13:09.779682 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:10.279670372 +0000 UTC m=+153.568920609 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.889973 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wsz9t" event={"ID":"fa387e7d-5a82-4577-bbe3-ea5aeb17adc2","Type":"ContainerStarted","Data":"000bca789d9b6e60c88aff1e53ec576749e0a6849a5ee32732720490060859a8"} Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.890678 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:09 crc kubenswrapper[4757]: E0129 15:13:09.891221 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:10.391202161 +0000 UTC m=+153.680452398 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.892801 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4eab09f75eb1c2dc8e1978865d67b158b19e69567a9b697eea9c46f8b0ed4ad5"} Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.893494 4757 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-dz9cf container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.893532 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dz9cf" podUID="7add9ebb-c4ec-4eed-affb-bdd76b207c29" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.893857 4757 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-mg555 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.29:6443/healthz\": dial tcp 10.217.0.29:6443: connect: connection refused" start-of-body= Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.893878 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-mg555" podUID="e9d54611-82e4-4698-b654-62a1d7144225" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.29:6443/healthz\": dial tcp 10.217.0.29:6443: connect: connection refused" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.894179 4757 patch_prober.go:28] interesting pod/downloads-7954f5f757-gs77j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.894200 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gs77j" podUID="3e6ceaed-34b1-4c4f-abe3-96756d34e30f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.894690 4757 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-grbn4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.894710 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-grbn4" 
podUID="d68b032e-f86c-4928-a676-03c9e49c6722" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.896521 4757 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-hghqd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.896657 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hghqd" podUID="9e9103bc-a2bb-4075-8454-c6f0af5c2c29" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.910236 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-lhw6r" podStartSLOduration=129.910215825 podStartE2EDuration="2m9.910215825s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:09.907128672 +0000 UTC m=+153.196378909" watchObservedRunningTime="2026-01-29 15:13:09.910215825 +0000 UTC m=+153.199466062" Jan 29 15:13:09 crc kubenswrapper[4757]: I0129 15:13:09.991975 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:09 crc kubenswrapper[4757]: E0129 15:13:09.993635 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:10.493619414 +0000 UTC m=+153.782869651 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:10 crc kubenswrapper[4757]: I0129 15:13:10.093345 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:10 crc kubenswrapper[4757]: E0129 15:13:10.093734 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 15:13:10.593718827 +0000 UTC m=+153.882969064 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:10 crc kubenswrapper[4757]: I0129 15:13:10.195225 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:10 crc kubenswrapper[4757]: E0129 15:13:10.195802 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:10.69579088 +0000 UTC m=+153.985041117 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:10 crc kubenswrapper[4757]: I0129 15:13:10.296495 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:10 crc kubenswrapper[4757]: E0129 15:13:10.296624 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:10.796605105 +0000 UTC m=+154.085855342 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:10 crc kubenswrapper[4757]: I0129 15:13:10.296889 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:10 crc kubenswrapper[4757]: E0129 15:13:10.297189 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:10.797180493 +0000 UTC m=+154.086430730 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:10 crc kubenswrapper[4757]: I0129 15:13:10.397629 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:10 crc kubenswrapper[4757]: E0129 15:13:10.398098 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:10.898082279 +0000 UTC m=+154.187332516 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:10 crc kubenswrapper[4757]: I0129 15:13:10.499416 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:10 crc kubenswrapper[4757]: E0129 15:13:10.499855 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:10.999840023 +0000 UTC m=+154.289090260 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:10 crc kubenswrapper[4757]: I0129 15:13:10.553149 4757 patch_prober.go:28] interesting pod/router-default-5444994796-h9rvk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:13:10 crc kubenswrapper[4757]: [-]has-synced failed: reason withheld Jan 29 15:13:10 crc kubenswrapper[4757]: [+]process-running ok Jan 29 15:13:10 crc kubenswrapper[4757]: healthz check failed Jan 29 15:13:10 crc kubenswrapper[4757]: I0129 15:13:10.553216 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9rvk" podUID="a5122101-998b-48d5-ae6e-c4746b2ba055" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:13:10 crc kubenswrapper[4757]: I0129 15:13:10.600512 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:10 crc kubenswrapper[4757]: E0129 15:13:10.600683 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:11.100663168 +0000 UTC m=+154.389913405 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:10 crc kubenswrapper[4757]: I0129 15:13:10.601002 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:10 crc kubenswrapper[4757]: E0129 15:13:10.601428 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:11.101411501 +0000 UTC m=+154.390661738 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:10 crc kubenswrapper[4757]: I0129 15:13:10.701699 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:10 crc kubenswrapper[4757]: E0129 15:13:10.701868 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:11.201851314 +0000 UTC m=+154.491101551 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:10 crc kubenswrapper[4757]: I0129 15:13:10.702084 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:10 crc kubenswrapper[4757]: E0129 15:13:10.702453 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:11.202444992 +0000 UTC m=+154.491695229 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:10 crc kubenswrapper[4757]: I0129 15:13:10.803876 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:10 crc kubenswrapper[4757]: E0129 15:13:10.804144 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:11.304079112 +0000 UTC m=+154.593329349 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:10 crc kubenswrapper[4757]: I0129 15:13:10.835671 4757 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-hghqd container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 29 15:13:10 crc kubenswrapper[4757]: I0129 15:13:10.835709 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hghqd" podUID="9e9103bc-a2bb-4075-8454-c6f0af5c2c29" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 29 15:13:10 crc kubenswrapper[4757]: I0129 15:13:10.835871 4757 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-hghqd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 29 15:13:10 crc kubenswrapper[4757]: I0129 15:13:10.835906 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hghqd" podUID="9e9103bc-a2bb-4075-8454-c6f0af5c2c29" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 29 15:13:10 crc kubenswrapper[4757]: I0129 15:13:10.897577 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"dc95e29137b4006bd72a08910c4a6c3def58b6cf39dbb2a4574f1431ee977f70"}
Jan 29 15:13:10 crc kubenswrapper[4757]: I0129 15:13:10.898910 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"80e97e550f7551e9fe13a8003107c38bf8abd6c722d647c337b3952721214f6d"}
Jan 29 15:13:10 crc kubenswrapper[4757]: I0129 15:13:10.900540 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dkjl5" event={"ID":"88566fd4-0a9f-42dd-a6d5-989dc7176aea","Type":"ContainerStarted","Data":"e06c48df5069596638c24680d4c565e08023ca1ff8d063f987df66dfa05f4c97"}
Jan 29 15:13:10 crc kubenswrapper[4757]: I0129 15:13:10.901884 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ssg7r" event={"ID":"f10cf2ea-d11c-422e-9f8e-b93d422df097","Type":"ContainerStarted","Data":"b41d1553aeaf893038310b0facbe8ce78d2c540f945448541abcb75cdd390535"}
Jan 29 15:13:10 crc kubenswrapper[4757]: I0129 15:13:10.902454 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:13:10 crc kubenswrapper[4757]: I0129 15:13:10.904708 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:10 crc kubenswrapper[4757]: E0129 15:13:10.904997 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:11.404985319 +0000 UTC m=+154.694235556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:10 crc kubenswrapper[4757]: I0129 15:13:10.936975 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-trnpt" podStartSLOduration=130.936957415 podStartE2EDuration="2m10.936957415s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:09.93090864 +0000 UTC m=+153.220158877" watchObservedRunningTime="2026-01-29 15:13:10.936957415 +0000 UTC m=+154.226207652"
Jan 29 15:13:11 crc kubenswrapper[4757]: I0129 15:13:11.006169 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:11 crc kubenswrapper[4757]: E0129 15:13:11.006991 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:11.506959539 +0000 UTC m=+154.796209776 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:11 crc kubenswrapper[4757]: I0129 15:13:11.016620 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-pvt9r" podStartSLOduration=131.016600971 podStartE2EDuration="2m11.016600971s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:10.974168609 +0000 UTC m=+154.263418846" watchObservedRunningTime="2026-01-29 15:13:11.016600971 +0000 UTC m=+154.305851208"
Jan 29 15:13:11 crc kubenswrapper[4757]: I0129 15:13:11.057496 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-m9f2c" podStartSLOduration=15.057476705 podStartE2EDuration="15.057476705s" podCreationTimestamp="2026-01-29 15:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:11.056542777 +0000 UTC m=+154.345793024" watchObservedRunningTime="2026-01-29 15:13:11.057476705 +0000 UTC m=+154.346726942"
Jan 29 15:13:11 crc kubenswrapper[4757]: I0129 15:13:11.074515 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tvv95" podStartSLOduration=131.074495769 podStartE2EDuration="2m11.074495769s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:11.073295173 +0000 UTC m=+154.362545410" watchObservedRunningTime="2026-01-29 15:13:11.074495769 +0000 UTC m=+154.363746006"
Jan 29 15:13:11 crc kubenswrapper[4757]: I0129 15:13:11.108015 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:11 crc kubenswrapper[4757]: E0129 15:13:11.108362 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:11.608346652 +0000 UTC m=+154.897596889 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:11 crc kubenswrapper[4757]: I0129 15:13:11.208960 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:11 crc kubenswrapper[4757]: E0129 15:13:11.209185 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:11.709145326 +0000 UTC m=+154.998395563 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:11 crc kubenswrapper[4757]: I0129 15:13:11.209243 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:11 crc kubenswrapper[4757]: E0129 15:13:11.209595 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:11.709577659 +0000 UTC m=+154.998827896 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:11 crc kubenswrapper[4757]: I0129 15:13:11.310609 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:11 crc kubenswrapper[4757]: E0129 15:13:11.310817 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:11.810796536 +0000 UTC m=+155.100046843 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:11 crc kubenswrapper[4757]: I0129 15:13:11.311041 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:11 crc kubenswrapper[4757]: E0129 15:13:11.311425 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:11.811414315 +0000 UTC m=+155.100664552 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:11 crc kubenswrapper[4757]: I0129 15:13:11.412352 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:11 crc kubenswrapper[4757]: E0129 15:13:11.412762 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:11.912744205 +0000 UTC m=+155.201994442 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:11 crc kubenswrapper[4757]: I0129 15:13:11.514166 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:11 crc kubenswrapper[4757]: E0129 15:13:11.514595 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:12.014575331 +0000 UTC m=+155.303825618 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:11 crc kubenswrapper[4757]: I0129 15:13:11.552612 4757 patch_prober.go:28] interesting pod/router-default-5444994796-h9rvk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:13:11 crc kubenswrapper[4757]: [-]has-synced failed: reason withheld
Jan 29 15:13:11 crc kubenswrapper[4757]: [+]process-running ok
Jan 29 15:13:11 crc kubenswrapper[4757]: healthz check failed
Jan 29 15:13:11 crc kubenswrapper[4757]: I0129 15:13:11.552670 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9rvk" podUID="a5122101-998b-48d5-ae6e-c4746b2ba055" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:13:11 crc kubenswrapper[4757]: I0129 15:13:11.612910 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-m9f2c"
Jan 29 15:13:11 crc kubenswrapper[4757]: I0129 15:13:11.615624 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:11 crc kubenswrapper[4757]: E0129 15:13:11.615971 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:12.115953813 +0000 UTC m=+155.405204050 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:11 crc kubenswrapper[4757]: I0129 15:13:11.717453 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:11 crc kubenswrapper[4757]: E0129 15:13:11.717787 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:12.217770478 +0000 UTC m=+155.507020715 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:11 crc kubenswrapper[4757]: I0129 15:13:11.784327 4757 csr.go:261] certificate signing request csr-dkcz2 is approved, waiting to be issued
Jan 29 15:13:11 crc kubenswrapper[4757]: I0129 15:13:11.816087 4757 csr.go:257] certificate signing request csr-dkcz2 is issued
Jan 29 15:13:11 crc kubenswrapper[4757]: I0129 15:13:11.821703 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:11 crc kubenswrapper[4757]: E0129 15:13:11.822064 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:12.322045327 +0000 UTC m=+155.611295564 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:11 crc kubenswrapper[4757]: I0129 15:13:11.907795 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-zrp48" event={"ID":"0b0330c1-19bb-492e-815a-2827e5749d68","Type":"ContainerStarted","Data":"9c12886aa9c1402fc791564022ef15e4f744b7897cafd4c7bc46bb3e04297d96"}
Jan 29 15:13:11 crc kubenswrapper[4757]: I0129 15:13:11.910567 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b7fb8938-c31a-4dba-9d00-e6b165b5ad13","Type":"ContainerStarted","Data":"92266714a59b7f6344d950f392f7da56ec30db5e95962886cd691019b893845a"}
Jan 29 15:13:11 crc kubenswrapper[4757]: I0129 15:13:11.924203 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:11 crc kubenswrapper[4757]: E0129 15:13:11.924665 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:12.424650396 +0000 UTC m=+155.713900633 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:11 crc kubenswrapper[4757]: I0129 15:13:11.944280 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nkstc" podStartSLOduration=131.944246678 podStartE2EDuration="2m11.944246678s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:11.096072331 +0000 UTC m=+154.385322578" watchObservedRunningTime="2026-01-29 15:13:11.944246678 +0000 UTC m=+155.233496915"
Jan 29 15:13:11 crc kubenswrapper[4757]: I0129 15:13:11.945014 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.945008851 podStartE2EDuration="4.945008851s" podCreationTimestamp="2026-01-29 15:13:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:11.942612699 +0000 UTC m=+155.231862946" watchObservedRunningTime="2026-01-29 15:13:11.945008851 +0000 UTC m=+155.234259088"
Jan 29 15:13:11 crc kubenswrapper[4757]: I0129 15:13:11.977403 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ssg7r" podStartSLOduration=131.977384449 podStartE2EDuration="2m11.977384449s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:11.973911924 +0000 UTC m=+155.263162161" watchObservedRunningTime="2026-01-29 15:13:11.977384449 +0000 UTC m=+155.266634686"
Jan 29 15:13:11 crc kubenswrapper[4757]: I0129 15:13:11.991820 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dkjl5" podStartSLOduration=131.991800174 podStartE2EDuration="2m11.991800174s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:11.990166805 +0000 UTC m=+155.279417042" watchObservedRunningTime="2026-01-29 15:13:11.991800174 +0000 UTC m=+155.281050421"
Jan 29 15:13:12 crc kubenswrapper[4757]: I0129 15:13:12.025381 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:12 crc kubenswrapper[4757]: E0129 15:13:12.025544 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:12.525518453 +0000 UTC m=+155.814768690 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:12 crc kubenswrapper[4757]: I0129 15:13:12.025739 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:12 crc kubenswrapper[4757]: E0129 15:13:12.027113 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:12.52710287 +0000 UTC m=+155.816353167 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:12 crc kubenswrapper[4757]: I0129 15:13:12.127536 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:12 crc kubenswrapper[4757]: E0129 15:13:12.127747 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:12.627717569 +0000 UTC m=+155.916967806 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:12 crc kubenswrapper[4757]: I0129 15:13:12.128060 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:12 crc kubenswrapper[4757]: E0129 15:13:12.128511 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:12.628498763 +0000 UTC m=+155.917749000 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:12 crc kubenswrapper[4757]: I0129 15:13:12.228802 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:12 crc kubenswrapper[4757]: E0129 15:13:12.228987 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:12.728964087 +0000 UTC m=+156.018214314 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:12 crc kubenswrapper[4757]: I0129 15:13:12.229167 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:12 crc kubenswrapper[4757]: E0129 15:13:12.229507 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:12.729493373 +0000 UTC m=+156.018743610 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:12 crc kubenswrapper[4757]: I0129 15:13:12.330298 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:12 crc kubenswrapper[4757]: E0129 15:13:12.330531 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:12.830479003 +0000 UTC m=+156.119729260 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:12 crc kubenswrapper[4757]: I0129 15:13:12.330588 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:12 crc kubenswrapper[4757]: E0129 15:13:12.331049 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:12.83103702 +0000 UTC m=+156.120287257 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:12 crc kubenswrapper[4757]: I0129 15:13:12.431245 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:12 crc kubenswrapper[4757]: E0129 15:13:12.431464 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:12.931416912 +0000 UTC m=+156.220667149 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:12 crc kubenswrapper[4757]: I0129 15:13:12.431758 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:12 crc kubenswrapper[4757]: E0129 15:13:12.432084 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:12.932067802 +0000 UTC m=+156.221318039 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:12 crc kubenswrapper[4757]: I0129 15:13:12.532928 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:12 crc kubenswrapper[4757]: E0129 15:13:12.533083 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:13.033054432 +0000 UTC m=+156.322304669 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:12 crc kubenswrapper[4757]: I0129 15:13:12.533207 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:12 crc kubenswrapper[4757]: E0129 15:13:12.533551 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:13.033539926 +0000 UTC m=+156.322790163 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:12 crc kubenswrapper[4757]: I0129 15:13:12.551617 4757 patch_prober.go:28] interesting pod/router-default-5444994796-h9rvk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:13:12 crc kubenswrapper[4757]: [-]has-synced failed: reason withheld
Jan 29 15:13:12 crc kubenswrapper[4757]: [+]process-running ok
Jan 29 15:13:12 crc kubenswrapper[4757]: healthz check failed
Jan 29 15:13:12 crc kubenswrapper[4757]: I0129 15:13:12.551679 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9rvk" podUID="a5122101-998b-48d5-ae6e-c4746b2ba055" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:13:12 crc kubenswrapper[4757]: I0129 15:13:12.634574 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:12 crc kubenswrapper[4757]: E0129 15:13:12.634770 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:13.134728043 +0000 UTC m=+156.423978280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:12 crc kubenswrapper[4757]: I0129 15:13:12.634856 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:12 crc kubenswrapper[4757]: E0129 15:13:12.635178 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:13.135167266 +0000 UTC m=+156.424417503 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:12 crc kubenswrapper[4757]: I0129 15:13:12.736036 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:12 crc kubenswrapper[4757]: E0129 15:13:12.736253 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:13.236220778 +0000 UTC m=+156.525471015 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:12 crc kubenswrapper[4757]: I0129 15:13:12.736322 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:12 crc kubenswrapper[4757]: E0129 15:13:12.736707 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:13.236696802 +0000 UTC m=+156.525947119 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:12 crc kubenswrapper[4757]: I0129 15:13:12.817065 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-29 15:08:11 +0000 UTC, rotation deadline is 2026-10-22 16:28:18.205584836 +0000 UTC
Jan 29 15:13:12 crc kubenswrapper[4757]: I0129 15:13:12.817111 4757 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6385h15m5.388479865s for next certificate rotation
Jan 29 15:13:12 crc kubenswrapper[4757]: I0129 15:13:12.837709 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:12 crc kubenswrapper[4757]: E0129 15:13:12.837914 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:13.337879088 +0000 UTC m=+156.627129325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:12 crc kubenswrapper[4757]: I0129 15:13:12.838200 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:12 crc kubenswrapper[4757]: E0129 15:13:12.838556 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:13.338540868 +0000 UTC m=+156.627791105 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:12 crc kubenswrapper[4757]: I0129 15:13:12.915532 4757 generic.go:334] "Generic (PLEG): container finished" podID="c8548b94-9099-42d5-914d-c2c10561bc5a" containerID="70445d3a6be4b1bc25e607c9d71e752774df96544d331f1b0f373c0d9ffd4967" exitCode=0
Jan 29 15:13:12 crc kubenswrapper[4757]: I0129 15:13:12.915609 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-9zrww" event={"ID":"c8548b94-9099-42d5-914d-c2c10561bc5a","Type":"ContainerDied","Data":"70445d3a6be4b1bc25e607c9d71e752774df96544d331f1b0f373c0d9ffd4967"}
Jan 29 15:13:12 crc kubenswrapper[4757]: I0129 15:13:12.938812 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:12 crc kubenswrapper[4757]: E0129 15:13:12.938953 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:13.43892461 +0000 UTC m=+156.728174857 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:12 crc kubenswrapper[4757]: I0129 15:13:12.939100 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:12 crc kubenswrapper[4757]: E0129 15:13:12.939442 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:13.439427685 +0000 UTC m=+156.728677922 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.039997 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:13 crc kubenswrapper[4757]: E0129 15:13:13.040227 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:13.540182338 +0000 UTC m=+156.829432595 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.040359 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:13 crc kubenswrapper[4757]: E0129 15:13:13.040740 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:13.540727045 +0000 UTC m=+156.829977282 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.141807 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:13 crc kubenswrapper[4757]: E0129 15:13:13.142020 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:13.641988523 +0000 UTC m=+156.931238760 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.142078 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:13 crc kubenswrapper[4757]: E0129 15:13:13.142438 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:13.642421346 +0000 UTC m=+156.931671633 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.243214 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:13 crc kubenswrapper[4757]: E0129 15:13:13.243378 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:13.743356375 +0000 UTC m=+157.032606612 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.243506 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:13 crc kubenswrapper[4757]: E0129 15:13:13.243829 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:13.743817629 +0000 UTC m=+157.033067866 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.345002 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:13 crc kubenswrapper[4757]: E0129 15:13:13.345153 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:13.845129769 +0000 UTC m=+157.134380006 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.345210 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:13 crc kubenswrapper[4757]: E0129 15:13:13.345548 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:13.845535851 +0000 UTC m=+157.134786088 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.405525 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx"
Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.411619 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zzvjx"
Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.446367 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:13 crc kubenswrapper[4757]: E0129 15:13:13.446505 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:13.94648397 +0000 UTC m=+157.235734207 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.446760 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:13 crc kubenswrapper[4757]: E0129 15:13:13.447011 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:13.947003426 +0000 UTC m=+157.236253663 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.508666 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.509291 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.511315 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.516081 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.520287 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.547739 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:13 crc kubenswrapper[4757]: E0129 15:13:13.547910 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:14.047879972 +0000 UTC m=+157.337130209 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.547986 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2714a3de-d79d-40c1-8ff1-159ec48eae49-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2714a3de-d79d-40c1-8ff1-159ec48eae49\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.548053 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.548145 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2714a3de-d79d-40c1-8ff1-159ec48eae49-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2714a3de-d79d-40c1-8ff1-159ec48eae49\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 15:13:13 crc kubenswrapper[4757]: E0129 15:13:13.548427 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:14.048419089 +0000 UTC m=+157.337669326 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.550952 4757 patch_prober.go:28] interesting pod/router-default-5444994796-h9rvk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:13:13 crc kubenswrapper[4757]: [-]has-synced failed: reason withheld Jan 29 15:13:13 crc kubenswrapper[4757]: [+]process-running ok Jan 29 15:13:13 crc kubenswrapper[4757]: healthz check failed Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.550998 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9rvk" podUID="a5122101-998b-48d5-ae6e-c4746b2ba055" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.649187 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:13 crc kubenswrapper[4757]: E0129 15:13:13.649386 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:14.149358257 +0000 UTC m=+157.438608484 (durationBeforeRetry 500ms). 
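Annotation: the router startup probe above fails because /healthz returns 500 while sub-checks such as backend-http and has-synced are still failing; the kubelet records the status code plus the start of the response body. A self-contained sketch of an HTTP probe with those semantics (2xx/3xx passes, anything else fails); the URL and port are made up for the example and are not taken from this log.

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// probeHTTP performs one kubelet-style HTTP probe: 2xx/3xx counts as
// success, anything else fails, and the start of the response body (the
// "[+]/[-]" healthz check lines) is kept for the failure message.
func probeHTTP(url string) (ok bool, output string) {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return false, err.Error()
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(io.LimitReader(resp.Body, 10*1024))
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return true, ""
	}
	return false, fmt.Sprintf("HTTP probe failed with statuscode: %d\nstart-of-body=%s",
		resp.StatusCode, strings.TrimSpace(string(body)))
}

func main() {
	// Hypothetical local endpoint standing in for the router's healthz.
	if ok, out := probeHTTP("http://127.0.0.1:1936/healthz"); !ok {
		fmt.Println("Probe failed:", out)
	}
}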
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.649512 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.649612 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2714a3de-d79d-40c1-8ff1-159ec48eae49-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2714a3de-d79d-40c1-8ff1-159ec48eae49\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.649654 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2714a3de-d79d-40c1-8ff1-159ec48eae49-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2714a3de-d79d-40c1-8ff1-159ec48eae49\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.649725 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2714a3de-d79d-40c1-8ff1-159ec48eae49-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2714a3de-d79d-40c1-8ff1-159ec48eae49\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:13:13 crc kubenswrapper[4757]: E0129 15:13:13.649870 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:14.149858392 +0000 UTC m=+157.439108719 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.726736 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2714a3de-d79d-40c1-8ff1-159ec48eae49-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2714a3de-d79d-40c1-8ff1-159ec48eae49\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.751059 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:13 crc kubenswrapper[4757]: E0129 15:13:13.751191 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:14.251173352 +0000 UTC m=+157.540423589 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.751294 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:13 crc kubenswrapper[4757]: E0129 15:13:13.751673 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:14.251648417 +0000 UTC m=+157.540898654 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.835594 4757 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-hghqd container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.835630 4757 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-hghqd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.835655 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hghqd" podUID="9e9103bc-a2bb-4075-8454-c6f0af5c2c29" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.835686 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hghqd" podUID="9e9103bc-a2bb-4075-8454-c6f0af5c2c29" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.852724 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:13 crc kubenswrapper[4757]: E0129 15:13:13.852903 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:14.352869504 +0000 UTC m=+157.642119741 (durationBeforeRetry 500ms). 
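Annotation: when triaging a stream like this it helps to split each record into its klog header fields (severity letter, MMDD date, time, pid, source file:line) and the message. A rough parser under the assumption that every record follows that header shape; the regex is approximate, not an official grammar, and the sample line is abbreviated from the log.

package main

import (
	"fmt"
	"regexp"
)

// klogHeader approximates the record header used throughout this log:
//   <severity><MMDD> <HH:MM:SS.micro> <pid> <file>:<line>] <message>
var klogHeader = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

func main() {
	line := "I0129 15:13:13.835655 4757 prober.go:107] Probe failed probeType=Liveness"
	if m := klogHeader.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s\nmsg=%s\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}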
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.852970 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:13 crc kubenswrapper[4757]: E0129 15:13:13.853301 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:14.353287486 +0000 UTC m=+157.642537723 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.921907 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-zrp48" event={"ID":"0b0330c1-19bb-492e-815a-2827e5749d68","Type":"ContainerStarted","Data":"1e66ce47f6fa199a0be8c3853d2ae38a912a254ccd5a6b454bf08d5f99f9f969"} Jan 29 15:13:13 crc kubenswrapper[4757]: E0129 15:13:13.957184 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:14.457169563 +0000 UTC m=+157.746419800 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.957111 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:13 crc kubenswrapper[4757]: I0129 15:13:13.957481 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:13 crc kubenswrapper[4757]: E0129 15:13:13.957845 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:14.457826403 +0000 UTC m=+157.747076640 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.022052 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.058308 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:14 crc kubenswrapper[4757]: E0129 15:13:14.058503 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:14.558474353 +0000 UTC m=+157.847724590 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.058569 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:14 crc kubenswrapper[4757]: E0129 15:13:14.058923 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:14.558913456 +0000 UTC m=+157.848163703 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.159558 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:14 crc kubenswrapper[4757]: E0129 15:13:14.159851 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:14.659822954 +0000 UTC m=+157.949073191 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.159933 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:14 crc kubenswrapper[4757]: E0129 15:13:14.161838 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:14.661819244 +0000 UTC m=+157.951069481 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.261296 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:14 crc kubenswrapper[4757]: E0129 15:13:14.261632 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:14.761614808 +0000 UTC m=+158.050865045 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.293242 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-9zrww"
Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.362134 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8548b94-9099-42d5-914d-c2c10561bc5a-secret-volume\") pod \"c8548b94-9099-42d5-914d-c2c10561bc5a\" (UID: \"c8548b94-9099-42d5-914d-c2c10561bc5a\") "
Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.362210 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2f42\" (UniqueName: \"kubernetes.io/projected/c8548b94-9099-42d5-914d-c2c10561bc5a-kube-api-access-j2f42\") pod \"c8548b94-9099-42d5-914d-c2c10561bc5a\" (UID: \"c8548b94-9099-42d5-914d-c2c10561bc5a\") "
Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.362384 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8548b94-9099-42d5-914d-c2c10561bc5a-config-volume\") pod \"c8548b94-9099-42d5-914d-c2c10561bc5a\" (UID: \"c8548b94-9099-42d5-914d-c2c10561bc5a\") "
Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.362626 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:14 crc kubenswrapper[4757]: E0129 15:13:14.362937 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:14.862923798 +0000 UTC m=+158.152174045 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.364137 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8548b94-9099-42d5-914d-c2c10561bc5a-config-volume" (OuterVolumeSpecName: "config-volume") pod "c8548b94-9099-42d5-914d-c2c10561bc5a" (UID: "c8548b94-9099-42d5-914d-c2c10561bc5a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.374761 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8548b94-9099-42d5-914d-c2c10561bc5a-kube-api-access-j2f42" (OuterVolumeSpecName: "kube-api-access-j2f42") pod "c8548b94-9099-42d5-914d-c2c10561bc5a" (UID: "c8548b94-9099-42d5-914d-c2c10561bc5a"). InnerVolumeSpecName "kube-api-access-j2f42". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.375818 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8548b94-9099-42d5-914d-c2c10561bc5a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c8548b94-9099-42d5-914d-c2c10561bc5a" (UID: "c8548b94-9099-42d5-914d-c2c10561bc5a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.463850 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.464170 4757 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8548b94-9099-42d5-914d-c2c10561bc5a-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.464184 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2f42\" (UniqueName: \"kubernetes.io/projected/c8548b94-9099-42d5-914d-c2c10561bc5a-kube-api-access-j2f42\") on node \"crc\" DevicePath \"\""
Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.464193 4757 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8548b94-9099-42d5-914d-c2c10561bc5a-config-volume\") on node \"crc\" DevicePath \"\""
Jan 29 15:13:14 crc kubenswrapper[4757]: E0129 15:13:14.464257 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:14.964241598 +0000 UTC m=+158.253491835 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.559619 4757 patch_prober.go:28] interesting pod/router-default-5444994796-h9rvk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:13:14 crc kubenswrapper[4757]: [-]has-synced failed: reason withheld Jan 29 15:13:14 crc kubenswrapper[4757]: [+]process-running ok Jan 29 15:13:14 crc kubenswrapper[4757]: healthz check failed Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.559924 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9rvk" podUID="a5122101-998b-48d5-ae6e-c4746b2ba055" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.566792 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:14 crc kubenswrapper[4757]: E0129 15:13:14.567101 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:15.067089504 +0000 UTC m=+158.356339741 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.637875 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-m9f2c" Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.667592 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:14 crc kubenswrapper[4757]: E0129 15:13:14.668192 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:15.168165407 +0000 UTC m=+158.457415644 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.748666 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.768837 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:14 crc kubenswrapper[4757]: E0129 15:13:14.769169 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:15.269156037 +0000 UTC m=+158.558406274 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.870101 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:14 crc kubenswrapper[4757]: E0129 15:13:14.870459 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:15.370443426 +0000 UTC m=+158.659693663 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.933418 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-9zrww" event={"ID":"c8548b94-9099-42d5-914d-c2c10561bc5a","Type":"ContainerDied","Data":"5619e374f0e7cc7e27e636fdc27c10a0104a6c32f10430a60197161aad763d4e"} Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.933463 4757 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5619e374f0e7cc7e27e636fdc27c10a0104a6c32f10430a60197161aad763d4e" Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.933527 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494980-9zrww" Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.950545 4757 generic.go:334] "Generic (PLEG): container finished" podID="b7fb8938-c31a-4dba-9d00-e6b165b5ad13" containerID="92266714a59b7f6344d950f392f7da56ec30db5e95962886cd691019b893845a" exitCode=0 Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.950621 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b7fb8938-c31a-4dba-9d00-e6b165b5ad13","Type":"ContainerDied","Data":"92266714a59b7f6344d950f392f7da56ec30db5e95962886cd691019b893845a"} Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.965854 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2714a3de-d79d-40c1-8ff1-159ec48eae49","Type":"ContainerStarted","Data":"260f397df1a13e9aa976fd2813cb2e8d06f9e70f65af2780b0d5c0217cbed82d"} Jan 29 15:13:14 crc kubenswrapper[4757]: I0129 15:13:14.971118 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:14 crc kubenswrapper[4757]: E0129 15:13:14.971436 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:15.471426136 +0000 UTC m=+158.760676373 (durationBeforeRetry 500ms). 
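Annotation: the "SyncLoop (PLEG)" records above are pod lifecycle events: the kubelet relists containers, notices that collect-profiles' container is gone (ContainerDied, exit code 0) and that revision-pruner-8-crc's sandbox has started (ContainerStarted), and feeds those deltas into its sync loop. A minimal event-channel sketch of that shape, with hypothetical types and container IDs truncated from the log:

package main

import "fmt"

// podLifecycleEvent is a hypothetical mirror of the PLEG deltas in the
// log: relisting containers yields ContainerStarted/ContainerDied events
// that are fed into the kubelet's sync loop.
type podLifecycleEvent struct {
	PodID string // pod UID
	Type  string // "ContainerStarted" or "ContainerDied"
	Data  string // container or sandbox ID
}

// syncLoop consumes events; a real kubelet would trigger a pod sync per
// event, here we only print them in the log's shape.
func syncLoop(events <-chan podLifecycleEvent) {
	for ev := range events {
		fmt.Printf("SyncLoop (PLEG): event for pod %s: %s %s\n", ev.PodID, ev.Type, ev.Data)
	}
}

func main() {
	ch := make(chan podLifecycleEvent, 2)
	ch <- podLifecycleEvent{"c8548b94-9099-42d5-914d-c2c10561bc5a", "ContainerDied", "5619e374f0e7"}
	ch <- podLifecycleEvent{"2714a3de-d79d-40c1-8ff1-159ec48eae49", "ContainerStarted", "260f397df1a1"}
	close(ch)
	syncLoop(ch)
}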
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.072478 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:15 crc kubenswrapper[4757]: E0129 15:13:15.072929 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:15.572688995 +0000 UTC m=+158.861939232 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.073060 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:15 crc kubenswrapper[4757]: E0129 15:13:15.073379 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:15.573367665 +0000 UTC m=+158.862617902 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.174731 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:15 crc kubenswrapper[4757]: E0129 15:13:15.175078 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:15.675051126 +0000 UTC m=+158.964301373 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.275782 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:15 crc kubenswrapper[4757]: E0129 15:13:15.276156 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:15.776139869 +0000 UTC m=+159.065390106 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.376844 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:15 crc kubenswrapper[4757]: E0129 15:13:15.377035 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:15.877009936 +0000 UTC m=+159.166260163 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.377418 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:15 crc kubenswrapper[4757]: E0129 15:13:15.377822 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:15.87780265 +0000 UTC m=+159.167052887 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.478019 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:15 crc kubenswrapper[4757]: E0129 15:13:15.478152 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:15.97813487 +0000 UTC m=+159.267385107 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.478358 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:15 crc kubenswrapper[4757]: E0129 15:13:15.478652 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:15.978644006 +0000 UTC m=+159.267894243 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.555001 4757 patch_prober.go:28] interesting pod/router-default-5444994796-h9rvk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:13:15 crc kubenswrapper[4757]: [-]has-synced failed: reason withheld Jan 29 15:13:15 crc kubenswrapper[4757]: [+]process-running ok Jan 29 15:13:15 crc kubenswrapper[4757]: healthz check failed Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.555057 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9rvk" podUID="a5122101-998b-48d5-ae6e-c4746b2ba055" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.579680 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:15 crc kubenswrapper[4757]: E0129 15:13:15.579829 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:16.079804571 +0000 UTC m=+159.369054808 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.580169 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:15 crc kubenswrapper[4757]: E0129 15:13:15.580498 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:16.080489422 +0000 UTC m=+159.369739649 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.681642 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:15 crc kubenswrapper[4757]: E0129 15:13:15.682089 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:16.182059579 +0000 UTC m=+159.471310256 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.739650 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-57qth"]
Jan 29 15:13:15 crc kubenswrapper[4757]: E0129 15:13:15.739893 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8548b94-9099-42d5-914d-c2c10561bc5a" containerName="collect-profiles"
Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.739912 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8548b94-9099-42d5-914d-c2c10561bc5a" containerName="collect-profiles"
Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.740033 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8548b94-9099-42d5-914d-c2c10561bc5a" containerName="collect-profiles"
Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.740893 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-57qth"
Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.750640 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-57qth"]
Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.751006 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.782702 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4596539-1be7-44ac-8e25-3fd37c823166-catalog-content\") pod \"certified-operators-57qth\" (UID: \"d4596539-1be7-44ac-8e25-3fd37c823166\") " pod="openshift-marketplace/certified-operators-57qth"
Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.782962 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.783072 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg5b9\" (UniqueName: \"kubernetes.io/projected/d4596539-1be7-44ac-8e25-3fd37c823166-kube-api-access-bg5b9\") pod \"certified-operators-57qth\" (UID: \"d4596539-1be7-44ac-8e25-3fd37c823166\") " pod="openshift-marketplace/certified-operators-57qth"
Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.783182 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4596539-1be7-44ac-8e25-3fd37c823166-utilities\") pod \"certified-operators-57qth\" (UID: \"d4596539-1be7-44ac-8e25-3fd37c823166\") " pod="openshift-marketplace/certified-operators-57qth"
Jan 29 15:13:15 crc kubenswrapper[4757]: E0129 15:13:15.783350 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:16.283329888 +0000 UTC m=+159.572580125 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.884599 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.884932 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4596539-1be7-44ac-8e25-3fd37c823166-utilities\") pod \"certified-operators-57qth\" (UID: \"d4596539-1be7-44ac-8e25-3fd37c823166\") " pod="openshift-marketplace/certified-operators-57qth"
Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.884986 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4596539-1be7-44ac-8e25-3fd37c823166-catalog-content\") pod \"certified-operators-57qth\" (UID: \"d4596539-1be7-44ac-8e25-3fd37c823166\") " pod="openshift-marketplace/certified-operators-57qth"
Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.885036 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg5b9\" (UniqueName: \"kubernetes.io/projected/d4596539-1be7-44ac-8e25-3fd37c823166-kube-api-access-bg5b9\") pod \"certified-operators-57qth\" (UID: \"d4596539-1be7-44ac-8e25-3fd37c823166\") " pod="openshift-marketplace/certified-operators-57qth"
Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.885810 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4596539-1be7-44ac-8e25-3fd37c823166-utilities\") pod \"certified-operators-57qth\" (UID: \"d4596539-1be7-44ac-8e25-3fd37c823166\") " pod="openshift-marketplace/certified-operators-57qth"
Jan 29 15:13:15 crc kubenswrapper[4757]: E0129 15:13:15.885897 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:16.385869965 +0000 UTC m=+159.675120212 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.886205 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4596539-1be7-44ac-8e25-3fd37c823166-catalog-content\") pod \"certified-operators-57qth\" (UID: \"d4596539-1be7-44ac-8e25-3fd37c823166\") " pod="openshift-marketplace/certified-operators-57qth"
Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.904030 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg5b9\" (UniqueName: \"kubernetes.io/projected/d4596539-1be7-44ac-8e25-3fd37c823166-kube-api-access-bg5b9\") pod \"certified-operators-57qth\" (UID: \"d4596539-1be7-44ac-8e25-3fd37c823166\") " pod="openshift-marketplace/certified-operators-57qth"
Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.935694 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pxw6w"]
Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.936633 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pxw6w"
Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.938676 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.947199 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pxw6w"]
Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.986096 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjf2f\" (UniqueName: \"kubernetes.io/projected/fd7070d7-3870-49f1-8976-094ad97b6efc-kube-api-access-bjf2f\") pod \"community-operators-pxw6w\" (UID: \"fd7070d7-3870-49f1-8976-094ad97b6efc\") " pod="openshift-marketplace/community-operators-pxw6w"
Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.986149 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.986194 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd7070d7-3870-49f1-8976-094ad97b6efc-utilities\") pod \"community-operators-pxw6w\" (UID: \"fd7070d7-3870-49f1-8976-094ad97b6efc\") " pod="openshift-marketplace/community-operators-pxw6w"
Jan 29 15:13:15 crc kubenswrapper[4757]: I0129 15:13:15.986321 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd7070d7-3870-49f1-8976-094ad97b6efc-catalog-content\") pod \"community-operators-pxw6w\" (UID: \"fd7070d7-3870-49f1-8976-094ad97b6efc\") " pod="openshift-marketplace/community-operators-pxw6w"
Jan 29 15:13:15 crc kubenswrapper[4757]: E0129 15:13:15.986553 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:16.486535465 +0000 UTC m=+159.775785782 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.059592 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-57qth"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.088708 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.088922 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd7070d7-3870-49f1-8976-094ad97b6efc-utilities\") pod \"community-operators-pxw6w\" (UID: \"fd7070d7-3870-49f1-8976-094ad97b6efc\") " pod="openshift-marketplace/community-operators-pxw6w"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.088959 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd7070d7-3870-49f1-8976-094ad97b6efc-catalog-content\") pod \"community-operators-pxw6w\" (UID: \"fd7070d7-3870-49f1-8976-094ad97b6efc\") " pod="openshift-marketplace/community-operators-pxw6w"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.089018 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjf2f\" (UniqueName: \"kubernetes.io/projected/fd7070d7-3870-49f1-8976-094ad97b6efc-kube-api-access-bjf2f\") pod \"community-operators-pxw6w\" (UID: \"fd7070d7-3870-49f1-8976-094ad97b6efc\") " pod="openshift-marketplace/community-operators-pxw6w"
Jan 29 15:13:16 crc kubenswrapper[4757]: E0129 15:13:16.089380 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:16.589364711 +0000 UTC m=+159.878614948 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.095420 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd7070d7-3870-49f1-8976-094ad97b6efc-utilities\") pod \"community-operators-pxw6w\" (UID: \"fd7070d7-3870-49f1-8976-094ad97b6efc\") " pod="openshift-marketplace/community-operators-pxw6w"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.096254 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd7070d7-3870-49f1-8976-094ad97b6efc-catalog-content\") pod \"community-operators-pxw6w\" (UID: \"fd7070d7-3870-49f1-8976-094ad97b6efc\") " pod="openshift-marketplace/community-operators-pxw6w"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.137658 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjf2f\" (UniqueName: \"kubernetes.io/projected/fd7070d7-3870-49f1-8976-094ad97b6efc-kube-api-access-bjf2f\") pod \"community-operators-pxw6w\" (UID: \"fd7070d7-3870-49f1-8976-094ad97b6efc\") " pod="openshift-marketplace/community-operators-pxw6w"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.151782 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2jc8z"]
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.153049 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2jc8z"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.154222 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2jc8z"]
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.193211 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43de85f7-11df-4e6f-8d3f-b982b03ce802-utilities\") pod \"certified-operators-2jc8z\" (UID: \"43de85f7-11df-4e6f-8d3f-b982b03ce802\") " pod="openshift-marketplace/certified-operators-2jc8z"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.193258 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.193366 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43de85f7-11df-4e6f-8d3f-b982b03ce802-catalog-content\") pod \"certified-operators-2jc8z\" (UID: \"43de85f7-11df-4e6f-8d3f-b982b03ce802\") " pod="openshift-marketplace/certified-operators-2jc8z"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.193396 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm2n7\" (UniqueName: \"kubernetes.io/projected/43de85f7-11df-4e6f-8d3f-b982b03ce802-kube-api-access-rm2n7\") pod \"certified-operators-2jc8z\" (UID: \"43de85f7-11df-4e6f-8d3f-b982b03ce802\") " pod="openshift-marketplace/certified-operators-2jc8z"
Jan 29 15:13:16 crc kubenswrapper[4757]: E0129 15:13:16.193712 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:16.693698902 +0000 UTC m=+159.982949139 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.227702 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.257660 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pxw6w"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.293837 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7fb8938-c31a-4dba-9d00-e6b165b5ad13-kube-api-access\") pod \"b7fb8938-c31a-4dba-9d00-e6b165b5ad13\" (UID: \"b7fb8938-c31a-4dba-9d00-e6b165b5ad13\") "
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.294239 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b7fb8938-c31a-4dba-9d00-e6b165b5ad13-kubelet-dir\") pod \"b7fb8938-c31a-4dba-9d00-e6b165b5ad13\" (UID: \"b7fb8938-c31a-4dba-9d00-e6b165b5ad13\") "
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.294457 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.294712 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43de85f7-11df-4e6f-8d3f-b982b03ce802-catalog-content\") pod \"certified-operators-2jc8z\" (UID: \"43de85f7-11df-4e6f-8d3f-b982b03ce802\") " pod="openshift-marketplace/certified-operators-2jc8z"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.294747 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm2n7\" (UniqueName: \"kubernetes.io/projected/43de85f7-11df-4e6f-8d3f-b982b03ce802-kube-api-access-rm2n7\") pod \"certified-operators-2jc8z\" (UID: \"43de85f7-11df-4e6f-8d3f-b982b03ce802\") " pod="openshift-marketplace/certified-operators-2jc8z"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.294808 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43de85f7-11df-4e6f-8d3f-b982b03ce802-utilities\") pod \"certified-operators-2jc8z\" (UID: \"43de85f7-11df-4e6f-8d3f-b982b03ce802\") " pod="openshift-marketplace/certified-operators-2jc8z"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.295618 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43de85f7-11df-4e6f-8d3f-b982b03ce802-utilities\") pod \"certified-operators-2jc8z\" (UID: \"43de85f7-11df-4e6f-8d3f-b982b03ce802\") " pod="openshift-marketplace/certified-operators-2jc8z"
Jan 29 15:13:16 crc kubenswrapper[4757]: E0129 15:13:16.298462 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:16.798420755 +0000 UTC m=+160.087670992 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.298494 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7fb8938-c31a-4dba-9d00-e6b165b5ad13-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b7fb8938-c31a-4dba-9d00-e6b165b5ad13" (UID: "b7fb8938-c31a-4dba-9d00-e6b165b5ad13"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.298907 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43de85f7-11df-4e6f-8d3f-b982b03ce802-catalog-content\") pod \"certified-operators-2jc8z\" (UID: \"43de85f7-11df-4e6f-8d3f-b982b03ce802\") " pod="openshift-marketplace/certified-operators-2jc8z"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.329037 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7fb8938-c31a-4dba-9d00-e6b165b5ad13-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b7fb8938-c31a-4dba-9d00-e6b165b5ad13" (UID: "b7fb8938-c31a-4dba-9d00-e6b165b5ad13"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.341178 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm2n7\" (UniqueName: \"kubernetes.io/projected/43de85f7-11df-4e6f-8d3f-b982b03ce802-kube-api-access-rm2n7\") pod \"certified-operators-2jc8z\" (UID: \"43de85f7-11df-4e6f-8d3f-b982b03ce802\") " pod="openshift-marketplace/certified-operators-2jc8z"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.351492 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c5pw7"]
Jan 29 15:13:16 crc kubenswrapper[4757]: E0129 15:13:16.352340 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7fb8938-c31a-4dba-9d00-e6b165b5ad13" containerName="pruner"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.352435 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7fb8938-c31a-4dba-9d00-e6b165b5ad13" containerName="pruner"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.352616 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7fb8938-c31a-4dba-9d00-e6b165b5ad13" containerName="pruner"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.353312 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c5pw7"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.382708 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c5pw7"]
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.401365 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e10b6b9-259a-417c-ba5d-311e75543637-catalog-content\") pod \"community-operators-c5pw7\" (UID: \"4e10b6b9-259a-417c-ba5d-311e75543637\") " pod="openshift-marketplace/community-operators-c5pw7"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.401593 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e10b6b9-259a-417c-ba5d-311e75543637-utilities\") pod \"community-operators-c5pw7\" (UID: \"4e10b6b9-259a-417c-ba5d-311e75543637\") " pod="openshift-marketplace/community-operators-c5pw7"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.401695 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.401777 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8p6x\" (UniqueName: \"kubernetes.io/projected/4e10b6b9-259a-417c-ba5d-311e75543637-kube-api-access-d8p6x\") pod \"community-operators-c5pw7\" (UID: \"4e10b6b9-259a-417c-ba5d-311e75543637\") " pod="openshift-marketplace/community-operators-c5pw7"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.401867 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7fb8938-c31a-4dba-9d00-e6b165b5ad13-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.401951 4757 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b7fb8938-c31a-4dba-9d00-e6b165b5ad13-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 29 15:13:16 crc kubenswrapper[4757]: E0129 15:13:16.402296 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:16.902282252 +0000 UTC m=+160.191532489 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.487693 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2jc8z"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.502875 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:16 crc kubenswrapper[4757]: E0129 15:13:16.503070 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:17.003037845 +0000 UTC m=+160.292288082 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.503195 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.503227 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8p6x\" (UniqueName: \"kubernetes.io/projected/4e10b6b9-259a-417c-ba5d-311e75543637-kube-api-access-d8p6x\") pod \"community-operators-c5pw7\" (UID: \"4e10b6b9-259a-417c-ba5d-311e75543637\") " pod="openshift-marketplace/community-operators-c5pw7"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.503337 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e10b6b9-259a-417c-ba5d-311e75543637-catalog-content\") pod \"community-operators-c5pw7\" (UID: \"4e10b6b9-259a-417c-ba5d-311e75543637\") " pod="openshift-marketplace/community-operators-c5pw7"
Jan 29 15:13:16 crc kubenswrapper[4757]: E0129 15:13:16.503574 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:17.003564081 +0000 UTC m=+160.292814318 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.503659 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e10b6b9-259a-417c-ba5d-311e75543637-utilities\") pod \"community-operators-c5pw7\" (UID: \"4e10b6b9-259a-417c-ba5d-311e75543637\") " pod="openshift-marketplace/community-operators-c5pw7"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.504048 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e10b6b9-259a-417c-ba5d-311e75543637-catalog-content\") pod \"community-operators-c5pw7\" (UID: \"4e10b6b9-259a-417c-ba5d-311e75543637\") " pod="openshift-marketplace/community-operators-c5pw7"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.504110 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e10b6b9-259a-417c-ba5d-311e75543637-utilities\") pod \"community-operators-c5pw7\" (UID: \"4e10b6b9-259a-417c-ba5d-311e75543637\") " pod="openshift-marketplace/community-operators-c5pw7"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.509158 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-57qth"]
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.529552 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8p6x\" (UniqueName: \"kubernetes.io/projected/4e10b6b9-259a-417c-ba5d-311e75543637-kube-api-access-d8p6x\") pod \"community-operators-c5pw7\" (UID: \"4e10b6b9-259a-417c-ba5d-311e75543637\") " pod="openshift-marketplace/community-operators-c5pw7"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.560101 4757 patch_prober.go:28] interesting pod/router-default-5444994796-h9rvk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:13:16 crc kubenswrapper[4757]: [-]has-synced failed: reason withheld
Jan 29 15:13:16 crc kubenswrapper[4757]: [+]process-running ok
Jan 29 15:13:16 crc kubenswrapper[4757]: healthz check failed
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.560149 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9rvk" podUID="a5122101-998b-48d5-ae6e-c4746b2ba055" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.606937 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:16 crc kubenswrapper[4757]: E0129 15:13:16.607110 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:17.107076578 +0000 UTC m=+160.396326815 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.607137 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:16 crc kubenswrapper[4757]: E0129 15:13:16.607521 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:17.10749398 +0000 UTC m=+160.396744217 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.685610 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c5pw7"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.707925 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:16 crc kubenswrapper[4757]: E0129 15:13:16.708153 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:17.20812783 +0000 UTC m=+160.497378057 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.708322 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:16 crc kubenswrapper[4757]: E0129 15:13:16.708646 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:17.208632625 +0000 UTC m=+160.497882862 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.743981 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2jc8z"]
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.788471 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pxw6w"]
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.809364 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:16 crc kubenswrapper[4757]: E0129 15:13:16.809567 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:17.309550533 +0000 UTC m=+160.598800770 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.809659 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:16 crc kubenswrapper[4757]: E0129 15:13:16.809968 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:17.309948505 +0000 UTC m=+160.599198742 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.840599 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hghqd"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.910513 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:16 crc kubenswrapper[4757]: E0129 15:13:16.910642 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:17.410614965 +0000 UTC m=+160.699865202 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.910792 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:16 crc kubenswrapper[4757]: E0129 15:13:16.911075 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:17.411063979 +0000 UTC m=+160.700314216 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.912644 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c5pw7"]
Jan 29 15:13:16 crc kubenswrapper[4757]: W0129 15:13:16.922891 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43de85f7_11df_4e6f_8d3f_b982b03ce802.slice/crio-97f906717ccc198996afc42c311011162821b18e40086373a8ba66c14501406f WatchSource:0}: Error finding container 97f906717ccc198996afc42c311011162821b18e40086373a8ba66c14501406f: Status 404 returned error can't find the container with id 97f906717ccc198996afc42c311011162821b18e40086373a8ba66c14501406f
Jan 29 15:13:16 crc kubenswrapper[4757]: W0129 15:13:16.974130 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e10b6b9_259a_417c_ba5d_311e75543637.slice/crio-6a6dbacad046d1e4bfe9f9815d1b409eeee9beb1230dbcbfea6a464f685c534f WatchSource:0}: Error finding container 6a6dbacad046d1e4bfe9f9815d1b409eeee9beb1230dbcbfea6a464f685c534f: Status 404 returned error can't find the container with id 6a6dbacad046d1e4bfe9f9815d1b409eeee9beb1230dbcbfea6a464f685c534f
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.976652 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2jc8z" event={"ID":"43de85f7-11df-4e6f-8d3f-b982b03ce802","Type":"ContainerStarted","Data":"97f906717ccc198996afc42c311011162821b18e40086373a8ba66c14501406f"}
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.978831 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxw6w" event={"ID":"fd7070d7-3870-49f1-8976-094ad97b6efc","Type":"ContainerStarted","Data":"6cb827fcca3a5433ae5e995c9d5cdd2fe816c9d942530449f1665b3241ccdd17"}
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.990401 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b7fb8938-c31a-4dba-9d00-e6b165b5ad13","Type":"ContainerDied","Data":"c15707bbed434a047efab7344bdea115c36b839106d53f0ced94f5a6be8040d4"}
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.990438 4757 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c15707bbed434a047efab7344bdea115c36b839106d53f0ced94f5a6be8040d4"
Jan 29 15:13:16 crc kubenswrapper[4757]: I0129 15:13:16.990485 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 15:13:17 crc kubenswrapper[4757]: I0129 15:13:17.001469 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57qth" event={"ID":"d4596539-1be7-44ac-8e25-3fd37c823166","Type":"ContainerStarted","Data":"545b652b71fadc301ba075abd413458ac6e02b209f5c18d95f991e4f37186346"}
Jan 29 15:13:17 crc kubenswrapper[4757]: I0129 15:13:17.013714 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:17 crc kubenswrapper[4757]: E0129 15:13:17.013930 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:17.513895804 +0000 UTC m=+160.803146041 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:17 crc kubenswrapper[4757]: I0129 15:13:17.014059 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:17 crc kubenswrapper[4757]: E0129 15:13:17.014439 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:17.51442542 +0000 UTC m=+160.803675657 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:17 crc kubenswrapper[4757]: I0129 15:13:17.115000 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:17 crc kubenswrapper[4757]: E0129 15:13:17.115163 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:17.615137772 +0000 UTC m=+160.904388009 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:17 crc kubenswrapper[4757]: I0129 15:13:17.115432 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:17 crc kubenswrapper[4757]: E0129 15:13:17.115734 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:17.61572492 +0000 UTC m=+160.904975157 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:17 crc kubenswrapper[4757]: I0129 15:13:17.216392 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:17 crc kubenswrapper[4757]: E0129 15:13:17.216576 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:17.716549955 +0000 UTC m=+161.005800192 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:17 crc kubenswrapper[4757]: I0129 15:13:17.216763 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:17 crc kubenswrapper[4757]: E0129 15:13:17.217120 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:17.717088042 +0000 UTC m=+161.006338279 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:17 crc kubenswrapper[4757]: I0129 15:13:17.317487 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:17 crc kubenswrapper[4757]: E0129 15:13:17.317698 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:17.817671049 +0000 UTC m=+161.106921286 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:17 crc kubenswrapper[4757]: I0129 15:13:17.317834 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:17 crc kubenswrapper[4757]: E0129 15:13:17.318212 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:17.818197725 +0000 UTC m=+161.107447962 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:17 crc kubenswrapper[4757]: I0129 15:13:17.418674 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:17 crc kubenswrapper[4757]: E0129 15:13:17.418889 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:17.918865656 +0000 UTC m=+161.208115893 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:17 crc kubenswrapper[4757]: I0129 15:13:17.520519 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:17 crc kubenswrapper[4757]: E0129 15:13:17.520852 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:18.020837395 +0000 UTC m=+161.310087632 (durationBeforeRetry 500ms).
Jan 29 15:13:17 crc kubenswrapper[4757]: I0129 15:13:17.552612 4757 patch_prober.go:28] interesting pod/router-default-5444994796-h9rvk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:13:17 crc kubenswrapper[4757]: [-]has-synced failed: reason withheld
Jan 29 15:13:17 crc kubenswrapper[4757]: [+]process-running ok
Jan 29 15:13:17 crc kubenswrapper[4757]: healthz check failed
Jan 29 15:13:17 crc kubenswrapper[4757]: I0129 15:13:17.552674 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9rvk" podUID="a5122101-998b-48d5-ae6e-c4746b2ba055" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:13:17 crc kubenswrapper[4757]: I0129 15:13:17.604946 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 15:13:17 crc kubenswrapper[4757]: I0129 15:13:17.605003 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 15:13:17 crc kubenswrapper[4757]: I0129 15:13:17.621343 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:17 crc kubenswrapper[4757]: E0129 15:13:17.621614 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:18.121600828 +0000 UTC m=+161.410851065 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:17 crc kubenswrapper[4757]: I0129 15:13:17.722684 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:17 crc kubenswrapper[4757]: E0129 15:13:17.723068 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:18.223049412 +0000 UTC m=+161.512299649 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:17 crc kubenswrapper[4757]: I0129 15:13:17.823255 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:17 crc kubenswrapper[4757]: E0129 15:13:17.823646 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:18.3236322 +0000 UTC m=+161.612882437 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
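The router and machine-config-daemon records above show the two shapes an HTTP probe failure takes: a reachable endpoint answering 500 with a healthz-style body (the [-]/[+] check lines), and a connection refused before any response at all. A hedged sketch of such a prober, simplified from the behavior visible in the log (this is not the kubelet's prober package):

// Illustrative HTTP probe: status >= 400 fails, and the start of the
// response body is kept for the failure message, as in the records above.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func httpProbe(url string) (ok bool, detail string) {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
		return false, err.Error()
	}
	defer resp.Body.Close()
	// Keep only the start of the body, like the "start-of-body=" field.
	body, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
	if resp.StatusCode >= 400 {
		return false, fmt.Sprintf("HTTP probe failed with statuscode: %d\n%s", resp.StatusCode, body)
	}
	return true, string(body)
}

func main() {
	ok, detail := httpProbe("http://127.0.0.1:8798/health")
	fmt.Println(ok, detail)
}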
Jan 29 15:13:17 crc kubenswrapper[4757]: I0129 15:13:17.925115 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:17 crc kubenswrapper[4757]: E0129 15:13:17.925515 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:18.425499777 +0000 UTC m=+161.714750024 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:17 crc kubenswrapper[4757]: I0129 15:13:17.943032 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-btp4k"]
Jan 29 15:13:17 crc kubenswrapper[4757]: I0129 15:13:17.944281 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btp4k"
Jan 29 15:13:17 crc kubenswrapper[4757]: I0129 15:13:17.948304 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 29 15:13:17 crc kubenswrapper[4757]: I0129 15:13:17.957399 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-btp4k"]
Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.006335 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5pw7" event={"ID":"4e10b6b9-259a-417c-ba5d-311e75543637","Type":"ContainerStarted","Data":"6a6dbacad046d1e4bfe9f9815d1b409eeee9beb1230dbcbfea6a464f685c534f"}
Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.007612 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2714a3de-d79d-40c1-8ff1-159ec48eae49","Type":"ContainerStarted","Data":"2348770a5aadba0d3f2d4daa9cc53ee3b5b4c01b4a7f3dd2884f1245bd65c5e5"}
Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.025881 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:18 crc kubenswrapper[4757]: E0129 15:13:18.026014 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:18.525985741 +0000 UTC m=+161.815235988 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.026047 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvvcl\" (UniqueName: \"kubernetes.io/projected/f2342b27-9060-4697-a957-65d07f099e82-kube-api-access-tvvcl\") pod \"redhat-marketplace-btp4k\" (UID: \"f2342b27-9060-4697-a957-65d07f099e82\") " pod="openshift-marketplace/redhat-marketplace-btp4k"
Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.026073 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2342b27-9060-4697-a957-65d07f099e82-catalog-content\") pod \"redhat-marketplace-btp4k\" (UID: \"f2342b27-9060-4697-a957-65d07f099e82\") " pod="openshift-marketplace/redhat-marketplace-btp4k"
Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.026116 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.026136 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2342b27-9060-4697-a957-65d07f099e82-utilities\") pod \"redhat-marketplace-btp4k\" (UID: \"f2342b27-9060-4697-a957-65d07f099e82\") " pod="openshift-marketplace/redhat-marketplace-btp4k"
Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.026296 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-zrp48" podStartSLOduration=139.02627958 podStartE2EDuration="2m19.02627958s" podCreationTimestamp="2026-01-29 15:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:18.025376113 +0000 UTC m=+161.314626370" watchObservedRunningTime="2026-01-29 15:13:18.02627958 +0000 UTC m=+161.315529827"
Jan 29 15:13:18 crc kubenswrapper[4757]: E0129 15:13:18.026434 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:18.526422015 +0000 UTC m=+161.815672252 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
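The pod_startup_latency_tracker record computes podStartSLOduration as the observed running time minus the pod's creation timestamp, less any tracked image-pull window (zero here, since both pull timestamps are the zero-time sentinel 0001-01-01). A small sketch of that arithmetic using the values from the record, with field names taken from the log and the code itself purely illustrative:

// Reproducing podStartSLOduration=139.02627958 from the record above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2026-01-29 15:10:59 +0000 UTC")
	running, _ := time.Parse(layout, "2026-01-29 15:13:18.02627958 +0000 UTC")

	// firstStartedPulling / lastFinishedPulling were never set, so no
	// image-pull time is subtracted.
	var firstStartedPulling, lastFinishedPulling time.Time
	pull := lastFinishedPulling.Sub(firstStartedPulling) // 0s here

	slo := running.Sub(created) - pull
	fmt.Println(slo) // 2m19.02627958s, i.e. 139.02627958s
}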
Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.126930 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:18 crc kubenswrapper[4757]: E0129 15:13:18.127141 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:18.627108376 +0000 UTC m=+161.916358623 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.127501 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvvcl\" (UniqueName: \"kubernetes.io/projected/f2342b27-9060-4697-a957-65d07f099e82-kube-api-access-tvvcl\") pod \"redhat-marketplace-btp4k\" (UID: \"f2342b27-9060-4697-a957-65d07f099e82\") " pod="openshift-marketplace/redhat-marketplace-btp4k"
Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.127557 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2342b27-9060-4697-a957-65d07f099e82-catalog-content\") pod \"redhat-marketplace-btp4k\" (UID: \"f2342b27-9060-4697-a957-65d07f099e82\") " pod="openshift-marketplace/redhat-marketplace-btp4k"
Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.127674 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.127719 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2342b27-9060-4697-a957-65d07f099e82-utilities\") pod \"redhat-marketplace-btp4k\" (UID: \"f2342b27-9060-4697-a957-65d07f099e82\") " pod="openshift-marketplace/redhat-marketplace-btp4k"
Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.128163 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2342b27-9060-4697-a957-65d07f099e82-catalog-content\") pod \"redhat-marketplace-btp4k\" (UID: \"f2342b27-9060-4697-a957-65d07f099e82\") " pod="openshift-marketplace/redhat-marketplace-btp4k"
Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.128674 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2342b27-9060-4697-a957-65d07f099e82-utilities\") pod \"redhat-marketplace-btp4k\" (UID: \"f2342b27-9060-4697-a957-65d07f099e82\") " pod="openshift-marketplace/redhat-marketplace-btp4k"
Jan 29 15:13:18 crc kubenswrapper[4757]: E0129 15:13:18.128724 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:18.628709394 +0000 UTC m=+161.917959641 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.157549 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvvcl\" (UniqueName: \"kubernetes.io/projected/f2342b27-9060-4697-a957-65d07f099e82-kube-api-access-tvvcl\") pod \"redhat-marketplace-btp4k\" (UID: \"f2342b27-9060-4697-a957-65d07f099e82\") " pod="openshift-marketplace/redhat-marketplace-btp4k"
Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.228589 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:18 crc kubenswrapper[4757]: E0129 15:13:18.228746 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:18.728725135 +0000 UTC m=+162.017975372 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.228857 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:18 crc kubenswrapper[4757]: E0129 15:13:18.229164 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:18.729152618 +0000 UTC m=+162.018402855 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.262483 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btp4k"
Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.329875 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:18 crc kubenswrapper[4757]: E0129 15:13:18.330209 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:18.830184499 +0000 UTC m=+162.119434736 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.339184 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jhlrf"]
Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.340082 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jhlrf" Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.357473 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jhlrf"] Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.431239 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92724a14-21db-441f-b509-142dc0a8dc15-catalog-content\") pod \"redhat-marketplace-jhlrf\" (UID: \"92724a14-21db-441f-b509-142dc0a8dc15\") " pod="openshift-marketplace/redhat-marketplace-jhlrf" Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.431631 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.431677 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92724a14-21db-441f-b509-142dc0a8dc15-utilities\") pod \"redhat-marketplace-jhlrf\" (UID: \"92724a14-21db-441f-b509-142dc0a8dc15\") " pod="openshift-marketplace/redhat-marketplace-jhlrf" Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.431700 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgwj9\" (UniqueName: \"kubernetes.io/projected/92724a14-21db-441f-b509-142dc0a8dc15-kube-api-access-xgwj9\") pod \"redhat-marketplace-jhlrf\" (UID: \"92724a14-21db-441f-b509-142dc0a8dc15\") " pod="openshift-marketplace/redhat-marketplace-jhlrf" Jan 29 15:13:18 crc kubenswrapper[4757]: E0129 15:13:18.432038 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:18.932026225 +0000 UTC m=+162.221276462 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.533133 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.533330 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92724a14-21db-441f-b509-142dc0a8dc15-catalog-content\") pod \"redhat-marketplace-jhlrf\" (UID: \"92724a14-21db-441f-b509-142dc0a8dc15\") " pod="openshift-marketplace/redhat-marketplace-jhlrf" Jan 29 15:13:18 crc kubenswrapper[4757]: E0129 15:13:18.533643 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:19.033610213 +0000 UTC m=+162.322860460 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.533711 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92724a14-21db-441f-b509-142dc0a8dc15-utilities\") pod \"redhat-marketplace-jhlrf\" (UID: \"92724a14-21db-441f-b509-142dc0a8dc15\") " pod="openshift-marketplace/redhat-marketplace-jhlrf" Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.533777 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgwj9\" (UniqueName: \"kubernetes.io/projected/92724a14-21db-441f-b509-142dc0a8dc15-kube-api-access-xgwj9\") pod \"redhat-marketplace-jhlrf\" (UID: \"92724a14-21db-441f-b509-142dc0a8dc15\") " pod="openshift-marketplace/redhat-marketplace-jhlrf" Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.533788 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92724a14-21db-441f-b509-142dc0a8dc15-catalog-content\") pod \"redhat-marketplace-jhlrf\" (UID: \"92724a14-21db-441f-b509-142dc0a8dc15\") " pod="openshift-marketplace/redhat-marketplace-jhlrf" Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.534075 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92724a14-21db-441f-b509-142dc0a8dc15-utilities\") pod \"redhat-marketplace-jhlrf\" (UID: 
\"92724a14-21db-441f-b509-142dc0a8dc15\") " pod="openshift-marketplace/redhat-marketplace-jhlrf" Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.541688 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-btp4k"] Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.553072 4757 patch_prober.go:28] interesting pod/router-default-5444994796-h9rvk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:13:18 crc kubenswrapper[4757]: [-]has-synced failed: reason withheld Jan 29 15:13:18 crc kubenswrapper[4757]: [+]process-running ok Jan 29 15:13:18 crc kubenswrapper[4757]: healthz check failed Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.553126 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9rvk" podUID="a5122101-998b-48d5-ae6e-c4746b2ba055" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.566135 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgwj9\" (UniqueName: \"kubernetes.io/projected/92724a14-21db-441f-b509-142dc0a8dc15-kube-api-access-xgwj9\") pod \"redhat-marketplace-jhlrf\" (UID: \"92724a14-21db-441f-b509-142dc0a8dc15\") " pod="openshift-marketplace/redhat-marketplace-jhlrf" Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.635371 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:18 crc kubenswrapper[4757]: E0129 15:13:18.635681 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:19.135663915 +0000 UTC m=+162.424914152 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.656020 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jhlrf" Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.736234 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:18 crc kubenswrapper[4757]: E0129 15:13:18.736667 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:19.236648815 +0000 UTC m=+162.525899052 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.840226 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:18 crc kubenswrapper[4757]: E0129 15:13:18.843687 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:19.343665868 +0000 UTC m=+162.632916105 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.888642 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk" Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.903199 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jhlrf"] Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.943899 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:18 crc kubenswrapper[4757]: E0129 15:13:18.944060 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:19.444032079 +0000 UTC m=+162.733282316 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.944430 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:18 crc kubenswrapper[4757]: E0129 15:13:18.945183 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:19.445171113 +0000 UTC m=+162.734421340 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.947634 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v8v75"]
Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.948815 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v8v75"
Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.958574 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 29 15:13:18 crc kubenswrapper[4757]: I0129 15:13:18.977555 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v8v75"]
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.021885 4757 generic.go:334] "Generic (PLEG): container finished" podID="f2342b27-9060-4697-a957-65d07f099e82" containerID="f45aef4d53e4d1f03def42bdc1c7a05993e482bf150c45599984f1f9238829bc" exitCode=0
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.021958 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btp4k" event={"ID":"f2342b27-9060-4697-a957-65d07f099e82","Type":"ContainerDied","Data":"f45aef4d53e4d1f03def42bdc1c7a05993e482bf150c45599984f1f9238829bc"}
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.022015 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btp4k" event={"ID":"f2342b27-9060-4697-a957-65d07f099e82","Type":"ContainerStarted","Data":"6af8def668221c99e21640b19c2ee6a6757b80a824e97d432b5d0e68881578fe"}
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.023624 4757 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.042231 4757 generic.go:334] "Generic (PLEG): container finished" podID="d4596539-1be7-44ac-8e25-3fd37c823166" containerID="3e805b09d3de9c949b272e067a10b865f0b9768207ea43831a603c192f2abb2f" exitCode=0
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.042639 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57qth" event={"ID":"d4596539-1be7-44ac-8e25-3fd37c823166","Type":"ContainerDied","Data":"3e805b09d3de9c949b272e067a10b865f0b9768207ea43831a603c192f2abb2f"}
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.048934 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:19 crc kubenswrapper[4757]: E0129 15:13:19.049237 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:19.549212226 +0000 UTC m=+162.838462463 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.050761 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bce413ab-1d96-4e66-b700-db27f6b52966-utilities\") pod \"redhat-operators-v8v75\" (UID: \"bce413ab-1d96-4e66-b700-db27f6b52966\") " pod="openshift-marketplace/redhat-operators-v8v75"
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.050854 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swndd\" (UniqueName: \"kubernetes.io/projected/bce413ab-1d96-4e66-b700-db27f6b52966-kube-api-access-swndd\") pod \"redhat-operators-v8v75\" (UID: \"bce413ab-1d96-4e66-b700-db27f6b52966\") " pod="openshift-marketplace/redhat-operators-v8v75"
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.050927 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.050998 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bce413ab-1d96-4e66-b700-db27f6b52966-catalog-content\") pod \"redhat-operators-v8v75\" (UID: \"bce413ab-1d96-4e66-b700-db27f6b52966\") " pod="openshift-marketplace/redhat-operators-v8v75"
Jan 29 15:13:19 crc kubenswrapper[4757]: E0129 15:13:19.051383 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:19.551369881 +0000 UTC m=+162.840620118 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
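The "Generic (PLEG): container finished" and "SyncLoop (PLEG)" pairs above come from the pod lifecycle event generator: a periodic relist compares each container's last known state with the runtime's current state and emits ContainerStarted or ContainerDied events, carrying the exit code (0 for these completed catalog-extraction containers). A condensed sketch of that comparison, with illustrative names rather than the kubelet's actual types:

// Hypothetical PLEG-style relist producing Started/Died events.
package main

import "fmt"

type containerState struct {
	id       string
	running  bool
	exitCode int
}

type event struct {
	podID, typ, data string
}

// relist diffs old vs. current container state for one pod.
func relist(podID string, old, cur map[string]containerState) []event {
	var evs []event
	for id, c := range cur {
		prev, seen := old[id]
		switch {
		case c.running && (!seen || !prev.running):
			evs = append(evs, event{podID, "ContainerStarted", id})
		case !c.running && seen && prev.running:
			evs = append(evs, event{podID, "ContainerDied",
				fmt.Sprintf("%s exitCode=%d", id, c.exitCode)})
		}
	}
	return evs
}

func main() {
	old := map[string]containerState{"f45aef4d": {id: "f45aef4d", running: true}}
	cur := map[string]containerState{"f45aef4d": {id: "f45aef4d", running: false, exitCode: 0}}
	for _, e := range relist("f2342b27-9060-4697-a957-65d07f099e82", old, cur) {
		fmt.Printf("SyncLoop (PLEG): event for pod %s: %s %s\n", e.podID, e.typ, e.data)
	}
}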
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.061705 4757 generic.go:334] "Generic (PLEG): container finished" podID="4e10b6b9-259a-417c-ba5d-311e75543637" containerID="33eb487cf9d4a6747b5b8e508373ba7db0db7d9788634cd1c52c29cae619e103" exitCode=0
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.061790 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5pw7" event={"ID":"4e10b6b9-259a-417c-ba5d-311e75543637","Type":"ContainerDied","Data":"33eb487cf9d4a6747b5b8e508373ba7db0db7d9788634cd1c52c29cae619e103"}
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.074722 4757 generic.go:334] "Generic (PLEG): container finished" podID="2714a3de-d79d-40c1-8ff1-159ec48eae49" containerID="2348770a5aadba0d3f2d4daa9cc53ee3b5b4c01b4a7f3dd2884f1245bd65c5e5" exitCode=0
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.075002 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2714a3de-d79d-40c1-8ff1-159ec48eae49","Type":"ContainerDied","Data":"2348770a5aadba0d3f2d4daa9cc53ee3b5b4c01b4a7f3dd2884f1245bd65c5e5"}
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.079146 4757 generic.go:334] "Generic (PLEG): container finished" podID="43de85f7-11df-4e6f-8d3f-b982b03ce802" containerID="8d6494d78f9cab25462f6121d0f17feaa1af864e8d14a5012e34844ec8237c36" exitCode=0
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.079259 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2jc8z" event={"ID":"43de85f7-11df-4e6f-8d3f-b982b03ce802","Type":"ContainerDied","Data":"8d6494d78f9cab25462f6121d0f17feaa1af864e8d14a5012e34844ec8237c36"}
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.086030 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jhlrf" event={"ID":"92724a14-21db-441f-b509-142dc0a8dc15","Type":"ContainerStarted","Data":"3e26fb4ca8785c9e78aec3ebfa31ae396e19431c9b52a2d84822aac52d255153"}
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.087597 4757 generic.go:334] "Generic (PLEG): container finished" podID="fd7070d7-3870-49f1-8976-094ad97b6efc" containerID="3353db46eb4906dd27361821b7dced7ea3843529d6a0d93475705822a970588e" exitCode=0
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.087630 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxw6w" event={"ID":"fd7070d7-3870-49f1-8976-094ad97b6efc","Type":"ContainerDied","Data":"3353db46eb4906dd27361821b7dced7ea3843529d6a0d93475705822a970588e"}
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.093433 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wnhtd"
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.152690 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.152871 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bce413ab-1d96-4e66-b700-db27f6b52966-catalog-content\") pod \"redhat-operators-v8v75\" (UID: \"bce413ab-1d96-4e66-b700-db27f6b52966\") " pod="openshift-marketplace/redhat-operators-v8v75" Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.152981 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bce413ab-1d96-4e66-b700-db27f6b52966-utilities\") pod \"redhat-operators-v8v75\" (UID: \"bce413ab-1d96-4e66-b700-db27f6b52966\") " pod="openshift-marketplace/redhat-operators-v8v75" Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.153009 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swndd\" (UniqueName: \"kubernetes.io/projected/bce413ab-1d96-4e66-b700-db27f6b52966-kube-api-access-swndd\") pod \"redhat-operators-v8v75\" (UID: \"bce413ab-1d96-4e66-b700-db27f6b52966\") " pod="openshift-marketplace/redhat-operators-v8v75" Jan 29 15:13:19 crc kubenswrapper[4757]: E0129 15:13:19.153468 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:19.653415423 +0000 UTC m=+162.942665670 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.154155 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bce413ab-1d96-4e66-b700-db27f6b52966-catalog-content\") pod \"redhat-operators-v8v75\" (UID: \"bce413ab-1d96-4e66-b700-db27f6b52966\") " pod="openshift-marketplace/redhat-operators-v8v75" Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.154476 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bce413ab-1d96-4e66-b700-db27f6b52966-utilities\") pod \"redhat-operators-v8v75\" (UID: \"bce413ab-1d96-4e66-b700-db27f6b52966\") " pod="openshift-marketplace/redhat-operators-v8v75" Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.176157 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swndd\" (UniqueName: \"kubernetes.io/projected/bce413ab-1d96-4e66-b700-db27f6b52966-kube-api-access-swndd\") pod \"redhat-operators-v8v75\" (UID: \"bce413ab-1d96-4e66-b700-db27f6b52966\") " pod="openshift-marketplace/redhat-operators-v8v75" Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.254080 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:19 crc kubenswrapper[4757]: E0129 15:13:19.254475 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:19.754462425 +0000 UTC m=+163.043712662 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.283136 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n44qs" Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.336009 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.336143 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.341646 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v8v75" Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.342889 4757 patch_prober.go:28] interesting pod/apiserver-76f77b778f-zrp48 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.17:8443/livez\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.342949 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-zrp48" podUID="0b0330c1-19bb-492e-815a-2827e5749d68" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.17:8443/livez\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.357848 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:13:19 crc kubenswrapper[4757]: E0129 15:13:19.358536 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:19.858521298 +0000 UTC m=+163.147771535 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.367486 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-99p4m"]
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.368671 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-99p4m"
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.425553 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ssg7r"
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.464770 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-mg555"
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.465478 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f40510d-f93a-4a84-ad4a-e503fa0bdf09-catalog-content\") pod \"redhat-operators-99p4m\" (UID: \"6f40510d-f93a-4a84-ad4a-e503fa0bdf09\") " pod="openshift-marketplace/redhat-operators-99p4m"
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.465577 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr9fs\" (UniqueName: \"kubernetes.io/projected/6f40510d-f93a-4a84-ad4a-e503fa0bdf09-kube-api-access-rr9fs\") pod \"redhat-operators-99p4m\" (UID: \"6f40510d-f93a-4a84-ad4a-e503fa0bdf09\") " pod="openshift-marketplace/redhat-operators-99p4m"
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.465594 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f40510d-f93a-4a84-ad4a-e503fa0bdf09-utilities\") pod \"redhat-operators-99p4m\" (UID: \"6f40510d-f93a-4a84-ad4a-e503fa0bdf09\") " pod="openshift-marketplace/redhat-operators-99p4m"
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.465648 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:19 crc kubenswrapper[4757]: E0129 15:13:19.467800 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:19.967790188 +0000 UTC m=+163.257040425 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.469087 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-99p4m"]
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.542121 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-dz9cf"
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.552733 4757 patch_prober.go:28] interesting pod/router-default-5444994796-h9rvk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:13:19 crc kubenswrapper[4757]: [-]has-synced failed: reason withheld
Jan 29 15:13:19 crc kubenswrapper[4757]: [+]process-running ok
Jan 29 15:13:19 crc kubenswrapper[4757]: healthz check failed
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.552776 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9rvk" podUID="a5122101-998b-48d5-ae6e-c4746b2ba055" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.565147 4757 patch_prober.go:28] interesting pod/console-f9d7485db-skxmw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.565213 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-skxmw" podUID="a0f71154-b1ff-4e61-9c93-8bcb95678bce" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused"
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.568891 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.569234 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f40510d-f93a-4a84-ad4a-e503fa0bdf09-catalog-content\") pod \"redhat-operators-99p4m\" (UID: \"6f40510d-f93a-4a84-ad4a-e503fa0bdf09\") " pod="openshift-marketplace/redhat-operators-99p4m"
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.569425 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr9fs\" (UniqueName: \"kubernetes.io/projected/6f40510d-f93a-4a84-ad4a-e503fa0bdf09-kube-api-access-rr9fs\") pod \"redhat-operators-99p4m\" (UID: \"6f40510d-f93a-4a84-ad4a-e503fa0bdf09\") " pod="openshift-marketplace/redhat-operators-99p4m"
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.569482 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f40510d-f93a-4a84-ad4a-e503fa0bdf09-utilities\") pod \"redhat-operators-99p4m\" (UID: \"6f40510d-f93a-4a84-ad4a-e503fa0bdf09\") " pod="openshift-marketplace/redhat-operators-99p4m"
Jan 29 15:13:19 crc kubenswrapper[4757]: E0129 15:13:19.569751 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:20.069730707 +0000 UTC m=+163.358980944 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.570181 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f40510d-f93a-4a84-ad4a-e503fa0bdf09-catalog-content\") pod \"redhat-operators-99p4m\" (UID: \"6f40510d-f93a-4a84-ad4a-e503fa0bdf09\") " pod="openshift-marketplace/redhat-operators-99p4m"
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.570780 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f40510d-f93a-4a84-ad4a-e503fa0bdf09-utilities\") pod \"redhat-operators-99p4m\" (UID: \"6f40510d-f93a-4a84-ad4a-e503fa0bdf09\") " pod="openshift-marketplace/redhat-operators-99p4m"
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.581853 4757 patch_prober.go:28] interesting pod/downloads-7954f5f757-gs77j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body=
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.581903 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gs77j" podUID="3e6ceaed-34b1-4c4f-abe3-96756d34e30f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused"
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.581930 4757 patch_prober.go:28] interesting pod/downloads-7954f5f757-gs77j container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body=
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.581982 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gs77j" podUID="3e6ceaed-34b1-4c4f-abe3-96756d34e30f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused"
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.637794 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr9fs\" (UniqueName: \"kubernetes.io/projected/6f40510d-f93a-4a84-ad4a-e503fa0bdf09-kube-api-access-rr9fs\") pod \"redhat-operators-99p4m\" (UID: \"6f40510d-f93a-4a84-ad4a-e503fa0bdf09\") " pod="openshift-marketplace/redhat-operators-99p4m"
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.684169 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:19 crc kubenswrapper[4757]: E0129 15:13:19.685478 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:20.185466082 +0000 UTC m=+163.474716319 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.712594 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-grbn4"
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.722117 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-99p4m"
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.784821 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:19 crc kubenswrapper[4757]: E0129 15:13:19.785878 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:20.285858055 +0000 UTC m=+163.575108282 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.789578 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v8v75"]
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.887238 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:19 crc kubenswrapper[4757]: E0129 15:13:19.889641 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:20.389626419 +0000 UTC m=+163.678876656 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:19 crc kubenswrapper[4757]: I0129 15:13:19.989636 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:19 crc kubenswrapper[4757]: E0129 15:13:19.990024 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:20.49000608 +0000 UTC m=+163.779256327 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.091093 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:20 crc kubenswrapper[4757]: E0129 15:13:20.091506 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:20.591494016 +0000 UTC m=+163.880744253 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.100963 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8v75" event={"ID":"bce413ab-1d96-4e66-b700-db27f6b52966","Type":"ContainerStarted","Data":"5f86ecd2623087577a1b8efa95f81ee47eab70a0be84f35e4665c3221ee72f28"}
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.103089 4757 generic.go:334] "Generic (PLEG): container finished" podID="92724a14-21db-441f-b509-142dc0a8dc15" containerID="6c94b05528981f308a94e9f9d0cabd7e7d973e273b04ee0a2602a75af66511da" exitCode=0
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.103973 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jhlrf" event={"ID":"92724a14-21db-441f-b509-142dc0a8dc15","Type":"ContainerDied","Data":"6c94b05528981f308a94e9f9d0cabd7e7d973e273b04ee0a2602a75af66511da"}
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.137785 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-99p4m"]
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.192122 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:20 crc kubenswrapper[4757]: E0129 15:13:20.192962 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:20.69294247 +0000 UTC m=+163.982192707 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.294200 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:20 crc kubenswrapper[4757]: E0129 15:13:20.294670 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:20.794632441 +0000 UTC m=+164.083882738 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.399221 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:20 crc kubenswrapper[4757]: E0129 15:13:20.399700 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:20.899681294 +0000 UTC m=+164.188931541 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.452709 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.500053 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2714a3de-d79d-40c1-8ff1-159ec48eae49-kubelet-dir\") pod \"2714a3de-d79d-40c1-8ff1-159ec48eae49\" (UID: \"2714a3de-d79d-40c1-8ff1-159ec48eae49\") "
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.500136 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2714a3de-d79d-40c1-8ff1-159ec48eae49-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2714a3de-d79d-40c1-8ff1-159ec48eae49" (UID: "2714a3de-d79d-40c1-8ff1-159ec48eae49"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.500237 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2714a3de-d79d-40c1-8ff1-159ec48eae49-kube-api-access\") pod \"2714a3de-d79d-40c1-8ff1-159ec48eae49\" (UID: \"2714a3de-d79d-40c1-8ff1-159ec48eae49\") "
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.500811 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.500956 4757 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2714a3de-d79d-40c1-8ff1-159ec48eae49-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 29 15:13:20 crc kubenswrapper[4757]: E0129 15:13:20.501129 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:21.001116687 +0000 UTC m=+164.290366924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.505875 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2714a3de-d79d-40c1-8ff1-159ec48eae49-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2714a3de-d79d-40c1-8ff1-159ec48eae49" (UID: "2714a3de-d79d-40c1-8ff1-159ec48eae49"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.561188 4757 patch_prober.go:28] interesting pod/router-default-5444994796-h9rvk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:13:20 crc kubenswrapper[4757]: [-]has-synced failed: reason withheld
Jan 29 15:13:20 crc kubenswrapper[4757]: [+]process-running ok
Jan 29 15:13:20 crc kubenswrapper[4757]: healthz check failed
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.561247 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9rvk" podUID="a5122101-998b-48d5-ae6e-c4746b2ba055" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.601777 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:20 crc kubenswrapper[4757]: E0129 15:13:20.601908 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:21.101884701 +0000 UTC m=+164.391134948 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.602118 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.602258 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2714a3de-d79d-40c1-8ff1-159ec48eae49-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 29 15:13:20 crc kubenswrapper[4757]: E0129 15:13:20.602501 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:21.102490459 +0000 UTC m=+164.391740696 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.620644 4757 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.702927 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:20 crc kubenswrapper[4757]: E0129 15:13:20.703141 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:21.203115958 +0000 UTC m=+164.492366195 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.703363 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:20 crc kubenswrapper[4757]: E0129 15:13:20.703697 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:21.203684195 +0000 UTC m=+164.492934432 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.804038 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:20 crc kubenswrapper[4757]: E0129 15:13:20.804515 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:21.304409238 +0000 UTC m=+164.593659475 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.874181 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8tbgk"]
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.874431 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk" podUID="42aab7ad-1293-4b39-8199-0b7f944a8f31" containerName="controller-manager" containerID="cri-o://5b66943fc40caea793124ece65bb5ece104197c4395d6dd1033077c1c2ad594d" gracePeriod=30
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.881802 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m"]
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.881998 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m" podUID="dacc418b-f809-4317-9526-08c5781c6f68" containerName="route-controller-manager" containerID="cri-o://782c17b0ca95e95c1dfbc7c966fba7678ba47041f0793d682790c816c8351bde" gracePeriod=30
Jan 29 15:13:20 crc kubenswrapper[4757]: I0129 15:13:20.905791 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:20 crc kubenswrapper[4757]: E0129 15:13:20.906089 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:21.406077548 +0000 UTC m=+164.695327785 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.006997 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:21 crc kubenswrapper[4757]: E0129 15:13:21.007342 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:21.507327536 +0000 UTC m=+164.796577763 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.108511 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:21 crc kubenswrapper[4757]: E0129 15:13:21.108829 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:21.608816921 +0000 UTC m=+164.898067158 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.118500 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wsz9t" event={"ID":"fa387e7d-5a82-4577-bbe3-ea5aeb17adc2","Type":"ContainerStarted","Data":"1aac3a1228c17277de7003cb3d1f41d2dd2c1f7d5e77b4b9578bd066bece10c2"}
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.118538 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wsz9t" event={"ID":"fa387e7d-5a82-4577-bbe3-ea5aeb17adc2","Type":"ContainerStarted","Data":"dea64418cc808d8bc6c7c0da812069211101e0f2f59508b0b1e7965e5fb9e70d"}
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.120748 4757 generic.go:334] "Generic (PLEG): container finished" podID="bce413ab-1d96-4e66-b700-db27f6b52966" containerID="c0b90e7ea5d158a9744e68e1cf966de7415e79fa91d0f42bc8fbb5161e0bf23f" exitCode=0
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.120786 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8v75" event={"ID":"bce413ab-1d96-4e66-b700-db27f6b52966","Type":"ContainerDied","Data":"c0b90e7ea5d158a9744e68e1cf966de7415e79fa91d0f42bc8fbb5161e0bf23f"}
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.152865 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2714a3de-d79d-40c1-8ff1-159ec48eae49","Type":"ContainerDied","Data":"260f397df1a13e9aa976fd2813cb2e8d06f9e70f65af2780b0d5c0217cbed82d"}
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.152906 4757 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="260f397df1a13e9aa976fd2813cb2e8d06f9e70f65af2780b0d5c0217cbed82d"
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.152963 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.184946 4757 generic.go:334] "Generic (PLEG): container finished" podID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" containerID="fd2fc4641f6c3054a6b7505ab31e538096b06ac9dc4fb098aac3b7db7eb3a088" exitCode=0
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.185045 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99p4m" event={"ID":"6f40510d-f93a-4a84-ad4a-e503fa0bdf09","Type":"ContainerDied","Data":"fd2fc4641f6c3054a6b7505ab31e538096b06ac9dc4fb098aac3b7db7eb3a088"}
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.185095 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99p4m" event={"ID":"6f40510d-f93a-4a84-ad4a-e503fa0bdf09","Type":"ContainerStarted","Data":"48a18fbeb46be4236a22b36be0a73430c2b22ad985a28bbb6052b517677c98eb"}
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.191045 4757 generic.go:334] "Generic (PLEG): container finished" podID="42aab7ad-1293-4b39-8199-0b7f944a8f31" containerID="5b66943fc40caea793124ece65bb5ece104197c4395d6dd1033077c1c2ad594d" exitCode=0
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.191118 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk" event={"ID":"42aab7ad-1293-4b39-8199-0b7f944a8f31","Type":"ContainerDied","Data":"5b66943fc40caea793124ece65bb5ece104197c4395d6dd1033077c1c2ad594d"}
Jan 29 15:13:21 crc kubenswrapper[4757]: E0129 15:13:21.212035 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:21.712016707 +0000 UTC m=+165.001266944 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.212080 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.212376 4757 generic.go:334] "Generic (PLEG): container finished" podID="dacc418b-f809-4317-9526-08c5781c6f68" containerID="782c17b0ca95e95c1dfbc7c966fba7678ba47041f0793d682790c816c8351bde" exitCode=0
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.212427 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m" event={"ID":"dacc418b-f809-4317-9526-08c5781c6f68","Type":"ContainerDied","Data":"782c17b0ca95e95c1dfbc7c966fba7678ba47041f0793d682790c816c8351bde"}
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.212484 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:21 crc kubenswrapper[4757]: E0129 15:13:21.212782 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:21.71277515 +0000 UTC m=+165.002025387 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.314126 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:21 crc kubenswrapper[4757]: E0129 15:13:21.314237 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:21.814220554 +0000 UTC m=+165.103470791 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.314472 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:21 crc kubenswrapper[4757]: E0129 15:13:21.314775 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:21.814767551 +0000 UTC m=+165.104017788 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.415932 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:21 crc kubenswrapper[4757]: E0129 15:13:21.416254 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:13:21.916237986 +0000 UTC m=+165.205488223 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.446490 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk"
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.517196 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clvwv\" (UniqueName: \"kubernetes.io/projected/42aab7ad-1293-4b39-8199-0b7f944a8f31-kube-api-access-clvwv\") pod \"42aab7ad-1293-4b39-8199-0b7f944a8f31\" (UID: \"42aab7ad-1293-4b39-8199-0b7f944a8f31\") "
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.517273 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42aab7ad-1293-4b39-8199-0b7f944a8f31-client-ca\") pod \"42aab7ad-1293-4b39-8199-0b7f944a8f31\" (UID: \"42aab7ad-1293-4b39-8199-0b7f944a8f31\") "
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.517346 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42aab7ad-1293-4b39-8199-0b7f944a8f31-serving-cert\") pod \"42aab7ad-1293-4b39-8199-0b7f944a8f31\" (UID: \"42aab7ad-1293-4b39-8199-0b7f944a8f31\") "
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.517531 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/42aab7ad-1293-4b39-8199-0b7f944a8f31-proxy-ca-bundles\") pod \"42aab7ad-1293-4b39-8199-0b7f944a8f31\" (UID: \"42aab7ad-1293-4b39-8199-0b7f944a8f31\") "
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.517590 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42aab7ad-1293-4b39-8199-0b7f944a8f31-config\") pod \"42aab7ad-1293-4b39-8199-0b7f944a8f31\" (UID: \"42aab7ad-1293-4b39-8199-0b7f944a8f31\") "
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.517848 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:21 crc kubenswrapper[4757]: E0129 15:13:21.518187 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:13:22.018173664 +0000 UTC m=+165.307423901 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kjgkg" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.518751 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42aab7ad-1293-4b39-8199-0b7f944a8f31-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "42aab7ad-1293-4b39-8199-0b7f944a8f31" (UID: "42aab7ad-1293-4b39-8199-0b7f944a8f31"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.519486 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42aab7ad-1293-4b39-8199-0b7f944a8f31-client-ca" (OuterVolumeSpecName: "client-ca") pod "42aab7ad-1293-4b39-8199-0b7f944a8f31" (UID: "42aab7ad-1293-4b39-8199-0b7f944a8f31"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.519903 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42aab7ad-1293-4b39-8199-0b7f944a8f31-config" (OuterVolumeSpecName: "config") pod "42aab7ad-1293-4b39-8199-0b7f944a8f31" (UID: "42aab7ad-1293-4b39-8199-0b7f944a8f31"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.525761 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42aab7ad-1293-4b39-8199-0b7f944a8f31-kube-api-access-clvwv" (OuterVolumeSpecName: "kube-api-access-clvwv") pod "42aab7ad-1293-4b39-8199-0b7f944a8f31" (UID: "42aab7ad-1293-4b39-8199-0b7f944a8f31"). InnerVolumeSpecName "kube-api-access-clvwv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.526232 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42aab7ad-1293-4b39-8199-0b7f944a8f31-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "42aab7ad-1293-4b39-8199-0b7f944a8f31" (UID: "42aab7ad-1293-4b39-8199-0b7f944a8f31"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.545745 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m"
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.555811 4757 patch_prober.go:28] interesting pod/router-default-5444994796-h9rvk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 15:13:21 crc kubenswrapper[4757]: [-]has-synced failed: reason withheld
Jan 29 15:13:21 crc kubenswrapper[4757]: [+]process-running ok
Jan 29 15:13:21 crc kubenswrapper[4757]: healthz check failed
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.555898 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9rvk" podUID="a5122101-998b-48d5-ae6e-c4746b2ba055" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.593317 4757 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-29T15:13:20.620968367Z","Handler":null,"Name":""}
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.604615 4757 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.604649 4757 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.619035 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84fvf\" (UniqueName: \"kubernetes.io/projected/dacc418b-f809-4317-9526-08c5781c6f68-kube-api-access-84fvf\") pod \"dacc418b-f809-4317-9526-08c5781c6f68\" (UID: \"dacc418b-f809-4317-9526-08c5781c6f68\") "
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.619073 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dacc418b-f809-4317-9526-08c5781c6f68-config\") pod \"dacc418b-f809-4317-9526-08c5781c6f68\" (UID: \"dacc418b-f809-4317-9526-08c5781c6f68\") "
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.619253 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.619298 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dacc418b-f809-4317-9526-08c5781c6f68-serving-cert\") pod \"dacc418b-f809-4317-9526-08c5781c6f68\" (UID: \"dacc418b-f809-4317-9526-08c5781c6f68\") "
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.619326 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dacc418b-f809-4317-9526-08c5781c6f68-client-ca\") pod \"dacc418b-f809-4317-9526-08c5781c6f68\" (UID: \"dacc418b-f809-4317-9526-08c5781c6f68\") "
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.619554 4757 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/42aab7ad-1293-4b39-8199-0b7f944a8f31-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.619568 4757 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42aab7ad-1293-4b39-8199-0b7f944a8f31-config\") on node \"crc\" DevicePath \"\""
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.619577 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clvwv\" (UniqueName: \"kubernetes.io/projected/42aab7ad-1293-4b39-8199-0b7f944a8f31-kube-api-access-clvwv\") on node \"crc\" DevicePath \"\""
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.619586 4757 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42aab7ad-1293-4b39-8199-0b7f944a8f31-client-ca\") on node \"crc\" DevicePath \"\""
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.619594 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42aab7ad-1293-4b39-8199-0b7f944a8f31-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.620407 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dacc418b-f809-4317-9526-08c5781c6f68-client-ca" (OuterVolumeSpecName: "client-ca") pod "dacc418b-f809-4317-9526-08c5781c6f68" (UID: "dacc418b-f809-4317-9526-08c5781c6f68"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.620719 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dacc418b-f809-4317-9526-08c5781c6f68-config" (OuterVolumeSpecName: "config") pod "dacc418b-f809-4317-9526-08c5781c6f68" (UID: "dacc418b-f809-4317-9526-08c5781c6f68"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.623131 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dacc418b-f809-4317-9526-08c5781c6f68-kube-api-access-84fvf" (OuterVolumeSpecName: "kube-api-access-84fvf") pod "dacc418b-f809-4317-9526-08c5781c6f68" (UID: "dacc418b-f809-4317-9526-08c5781c6f68"). InnerVolumeSpecName "kube-api-access-84fvf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.625608 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.625836 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dacc418b-f809-4317-9526-08c5781c6f68-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dacc418b-f809-4317-9526-08c5781c6f68" (UID: "dacc418b-f809-4317-9526-08c5781c6f68"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.721228 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.721365 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c722d3b-1755-4633-967e-35591890a231-metrics-certs\") pod \"network-metrics-daemon-drtf8\" (UID: \"8c722d3b-1755-4633-967e-35591890a231\") " pod="openshift-multus/network-metrics-daemon-drtf8"
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.721451 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84fvf\" (UniqueName: \"kubernetes.io/projected/dacc418b-f809-4317-9526-08c5781c6f68-kube-api-access-84fvf\") on node \"crc\" DevicePath \"\""
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.721464 4757 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dacc418b-f809-4317-9526-08c5781c6f68-config\") on node \"crc\" DevicePath \"\""
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.721473 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dacc418b-f809-4317-9526-08c5781c6f68-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.721481 4757 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dacc418b-f809-4317-9526-08c5781c6f68-client-ca\") on node \"crc\" DevicePath \"\""
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.725991 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c722d3b-1755-4633-967e-35591890a231-metrics-certs\") pod \"network-metrics-daemon-drtf8\" (UID: \"8c722d3b-1755-4633-967e-35591890a231\") " pod="openshift-multus/network-metrics-daemon-drtf8"
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.733776 4757 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.733824 4757 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.740563 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-drtf8"
Jan 29 15:13:21 crc kubenswrapper[4757]: I0129 15:13:21.833557 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kjgkg\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.012643 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg"
Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.218680 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs"]
Jan 29 15:13:22 crc kubenswrapper[4757]: E0129 15:13:22.218948 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2714a3de-d79d-40c1-8ff1-159ec48eae49" containerName="pruner"
Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.218964 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="2714a3de-d79d-40c1-8ff1-159ec48eae49" containerName="pruner"
Jan 29 15:13:22 crc kubenswrapper[4757]: E0129 15:13:22.218980 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dacc418b-f809-4317-9526-08c5781c6f68" containerName="route-controller-manager"
Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.218988 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="dacc418b-f809-4317-9526-08c5781c6f68" containerName="route-controller-manager"
Jan 29 15:13:22 crc kubenswrapper[4757]: E0129 15:13:22.218999 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42aab7ad-1293-4b39-8199-0b7f944a8f31" containerName="controller-manager"
Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.219006 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="42aab7ad-1293-4b39-8199-0b7f944a8f31" containerName="controller-manager"
Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.219122 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="42aab7ad-1293-4b39-8199-0b7f944a8f31" containerName="controller-manager"
Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.219141 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="dacc418b-f809-4317-9526-08c5781c6f68" containerName="route-controller-manager"
Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.219157 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="2714a3de-d79d-40c1-8ff1-159ec48eae49" containerName="pruner"
Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.219656 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6b4f9d5889-8tfls"]
Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.220244 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls"
Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.220688 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs"
Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.229597 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk"
Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.239198 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-8tbgk" event={"ID":"42aab7ad-1293-4b39-8199-0b7f944a8f31","Type":"ContainerDied","Data":"414a35efbacc66d2da9e2c69276a7d50806840d711f8995f95f18ea1e63a5200"}
Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.239292 4757 scope.go:117] "RemoveContainer" containerID="5b66943fc40caea793124ece65bb5ece104197c4395d6dd1033077c1c2ad594d"
Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.256291 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs"]
Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.263426 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b4f9d5889-8tfls"]
Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.269619 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m" event={"ID":"dacc418b-f809-4317-9526-08c5781c6f68","Type":"ContainerDied","Data":"83e8897cb2c923640d6c9b2f2923cd88c5675980240b5173cc2fb05dd69d2d6a"}
Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.269713 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m"
Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.312111 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8tbgk"]
Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.330882 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7301d2e8-5210-4188-a6dd-ba1244e29ed1-client-ca\") pod \"controller-manager-6b4f9d5889-8tfls\" (UID: \"7301d2e8-5210-4188-a6dd-ba1244e29ed1\") " pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls"
Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.330945 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj27p\" (UniqueName: \"kubernetes.io/projected/ef07c13a-83ac-4354-9c37-9b4950dd9259-kube-api-access-dj27p\") pod \"route-controller-manager-57694ccfbc-5hsxs\" (UID: \"ef07c13a-83ac-4354-9c37-9b4950dd9259\") " pod="openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs"
Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.330996 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef07c13a-83ac-4354-9c37-9b4950dd9259-client-ca\") pod \"route-controller-manager-57694ccfbc-5hsxs\" (UID: \"ef07c13a-83ac-4354-9c37-9b4950dd9259\") " pod="openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs"
Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.331026 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls27d\" (UniqueName: \"kubernetes.io/projected/7301d2e8-5210-4188-a6dd-ba1244e29ed1-kube-api-access-ls27d\") pod \"controller-manager-6b4f9d5889-8tfls\" (UID: \"7301d2e8-5210-4188-a6dd-ba1244e29ed1\") " 
pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.331066 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7301d2e8-5210-4188-a6dd-ba1244e29ed1-proxy-ca-bundles\") pod \"controller-manager-6b4f9d5889-8tfls\" (UID: \"7301d2e8-5210-4188-a6dd-ba1244e29ed1\") " pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.331090 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7301d2e8-5210-4188-a6dd-ba1244e29ed1-config\") pod \"controller-manager-6b4f9d5889-8tfls\" (UID: \"7301d2e8-5210-4188-a6dd-ba1244e29ed1\") " pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.331117 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef07c13a-83ac-4354-9c37-9b4950dd9259-serving-cert\") pod \"route-controller-manager-57694ccfbc-5hsxs\" (UID: \"ef07c13a-83ac-4354-9c37-9b4950dd9259\") " pod="openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.331140 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef07c13a-83ac-4354-9c37-9b4950dd9259-config\") pod \"route-controller-manager-57694ccfbc-5hsxs\" (UID: \"ef07c13a-83ac-4354-9c37-9b4950dd9259\") " pod="openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.331186 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7301d2e8-5210-4188-a6dd-ba1244e29ed1-serving-cert\") pod \"controller-manager-6b4f9d5889-8tfls\" (UID: \"7301d2e8-5210-4188-a6dd-ba1244e29ed1\") " pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.332531 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-8tbgk"] Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.335413 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wsz9t" event={"ID":"fa387e7d-5a82-4577-bbe3-ea5aeb17adc2","Type":"ContainerStarted","Data":"6aee85542d62fd15db593ab997703bd607e3b66e52d7f64e3bf24876aca5f4af"} Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.341128 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m"] Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.351134 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pf59m"] Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.382902 4757 scope.go:117] "RemoveContainer" containerID="782c17b0ca95e95c1dfbc7c966fba7678ba47041f0793d682790c816c8351bde" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.432446 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7301d2e8-5210-4188-a6dd-ba1244e29ed1-client-ca\") pod \"controller-manager-6b4f9d5889-8tfls\" (UID: \"7301d2e8-5210-4188-a6dd-ba1244e29ed1\") " pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.432498 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj27p\" (UniqueName: \"kubernetes.io/projected/ef07c13a-83ac-4354-9c37-9b4950dd9259-kube-api-access-dj27p\") pod \"route-controller-manager-57694ccfbc-5hsxs\" (UID: \"ef07c13a-83ac-4354-9c37-9b4950dd9259\") " pod="openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.432538 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef07c13a-83ac-4354-9c37-9b4950dd9259-client-ca\") pod \"route-controller-manager-57694ccfbc-5hsxs\" (UID: \"ef07c13a-83ac-4354-9c37-9b4950dd9259\") " pod="openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.432566 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls27d\" (UniqueName: \"kubernetes.io/projected/7301d2e8-5210-4188-a6dd-ba1244e29ed1-kube-api-access-ls27d\") pod \"controller-manager-6b4f9d5889-8tfls\" (UID: \"7301d2e8-5210-4188-a6dd-ba1244e29ed1\") " pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.432639 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7301d2e8-5210-4188-a6dd-ba1244e29ed1-proxy-ca-bundles\") pod \"controller-manager-6b4f9d5889-8tfls\" (UID: \"7301d2e8-5210-4188-a6dd-ba1244e29ed1\") " pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.432661 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7301d2e8-5210-4188-a6dd-ba1244e29ed1-config\") pod \"controller-manager-6b4f9d5889-8tfls\" (UID: \"7301d2e8-5210-4188-a6dd-ba1244e29ed1\") " pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.432695 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef07c13a-83ac-4354-9c37-9b4950dd9259-serving-cert\") pod \"route-controller-manager-57694ccfbc-5hsxs\" (UID: \"ef07c13a-83ac-4354-9c37-9b4950dd9259\") " pod="openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.432716 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef07c13a-83ac-4354-9c37-9b4950dd9259-config\") pod \"route-controller-manager-57694ccfbc-5hsxs\" (UID: \"ef07c13a-83ac-4354-9c37-9b4950dd9259\") " pod="openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.432768 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7301d2e8-5210-4188-a6dd-ba1244e29ed1-serving-cert\") pod \"controller-manager-6b4f9d5889-8tfls\" (UID: 
\"7301d2e8-5210-4188-a6dd-ba1244e29ed1\") " pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.434554 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef07c13a-83ac-4354-9c37-9b4950dd9259-client-ca\") pod \"route-controller-manager-57694ccfbc-5hsxs\" (UID: \"ef07c13a-83ac-4354-9c37-9b4950dd9259\") " pod="openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.436592 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef07c13a-83ac-4354-9c37-9b4950dd9259-serving-cert\") pod \"route-controller-manager-57694ccfbc-5hsxs\" (UID: \"ef07c13a-83ac-4354-9c37-9b4950dd9259\") " pod="openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.440783 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7301d2e8-5210-4188-a6dd-ba1244e29ed1-proxy-ca-bundles\") pod \"controller-manager-6b4f9d5889-8tfls\" (UID: \"7301d2e8-5210-4188-a6dd-ba1244e29ed1\") " pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.441003 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef07c13a-83ac-4354-9c37-9b4950dd9259-config\") pod \"route-controller-manager-57694ccfbc-5hsxs\" (UID: \"ef07c13a-83ac-4354-9c37-9b4950dd9259\") " pod="openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.441593 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7301d2e8-5210-4188-a6dd-ba1244e29ed1-config\") pod \"controller-manager-6b4f9d5889-8tfls\" (UID: \"7301d2e8-5210-4188-a6dd-ba1244e29ed1\") " pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.443606 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7301d2e8-5210-4188-a6dd-ba1244e29ed1-client-ca\") pod \"controller-manager-6b4f9d5889-8tfls\" (UID: \"7301d2e8-5210-4188-a6dd-ba1244e29ed1\") " pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.444920 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7301d2e8-5210-4188-a6dd-ba1244e29ed1-serving-cert\") pod \"controller-manager-6b4f9d5889-8tfls\" (UID: \"7301d2e8-5210-4188-a6dd-ba1244e29ed1\") " pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.486326 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj27p\" (UniqueName: \"kubernetes.io/projected/ef07c13a-83ac-4354-9c37-9b4950dd9259-kube-api-access-dj27p\") pod \"route-controller-manager-57694ccfbc-5hsxs\" (UID: \"ef07c13a-83ac-4354-9c37-9b4950dd9259\") " pod="openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.490256 4757 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ls27d\" (UniqueName: \"kubernetes.io/projected/7301d2e8-5210-4188-a6dd-ba1244e29ed1-kube-api-access-ls27d\") pod \"controller-manager-6b4f9d5889-8tfls\" (UID: \"7301d2e8-5210-4188-a6dd-ba1244e29ed1\") " pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.553942 4757 patch_prober.go:28] interesting pod/router-default-5444994796-h9rvk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:13:22 crc kubenswrapper[4757]: [-]has-synced failed: reason withheld Jan 29 15:13:22 crc kubenswrapper[4757]: [+]process-running ok Jan 29 15:13:22 crc kubenswrapper[4757]: healthz check failed Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.554046 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9rvk" podUID="a5122101-998b-48d5-ae6e-c4746b2ba055" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.601809 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-wsz9t" podStartSLOduration=26.601787142 podStartE2EDuration="26.601787142s" podCreationTimestamp="2026-01-29 15:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:22.365946399 +0000 UTC m=+165.655196646" watchObservedRunningTime="2026-01-29 15:13:22.601787142 +0000 UTC m=+165.891037379" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.603789 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kjgkg"] Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.609181 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.639728 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs" Jan 29 15:13:22 crc kubenswrapper[4757]: I0129 15:13:22.690143 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-drtf8"] Jan 29 15:13:22 crc kubenswrapper[4757]: W0129 15:13:22.743711 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c722d3b_1755_4633_967e_35591890a231.slice/crio-5a88e121325608a242501a96dd9bbe8801319a17ceafb1cd486adb47e6a3951d WatchSource:0}: Error finding container 5a88e121325608a242501a96dd9bbe8801319a17ceafb1cd486adb47e6a3951d: Status 404 returned error can't find the container with id 5a88e121325608a242501a96dd9bbe8801319a17ceafb1cd486adb47e6a3951d Jan 29 15:13:23 crc kubenswrapper[4757]: I0129 15:13:23.309736 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b4f9d5889-8tfls"] Jan 29 15:13:23 crc kubenswrapper[4757]: I0129 15:13:23.394183 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs"] Jan 29 15:13:23 crc kubenswrapper[4757]: I0129 15:13:23.429819 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42aab7ad-1293-4b39-8199-0b7f944a8f31" path="/var/lib/kubelet/pods/42aab7ad-1293-4b39-8199-0b7f944a8f31/volumes" Jan 29 15:13:23 crc kubenswrapper[4757]: I0129 15:13:23.430999 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 29 15:13:23 crc kubenswrapper[4757]: I0129 15:13:23.431869 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dacc418b-f809-4317-9526-08c5781c6f68" path="/var/lib/kubelet/pods/dacc418b-f809-4317-9526-08c5781c6f68/volumes" Jan 29 15:13:23 crc kubenswrapper[4757]: I0129 15:13:23.433062 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" event={"ID":"7301d2e8-5210-4188-a6dd-ba1244e29ed1","Type":"ContainerStarted","Data":"d88d3ef9f179fbc9a2e8df871c27dba478d5e76b7ddb9ab94faeeeae199957f3"} Jan 29 15:13:23 crc kubenswrapper[4757]: I0129 15:13:23.451989 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-drtf8" event={"ID":"8c722d3b-1755-4633-967e-35591890a231","Type":"ContainerStarted","Data":"5a88e121325608a242501a96dd9bbe8801319a17ceafb1cd486adb47e6a3951d"} Jan 29 15:13:23 crc kubenswrapper[4757]: I0129 15:13:23.481149 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" event={"ID":"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13","Type":"ContainerStarted","Data":"b4e30bf9b9a7942d165538d84a162105f466bd380f1360c88ecebc6cd5755dbb"} Jan 29 15:13:23 crc kubenswrapper[4757]: I0129 15:13:23.481403 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" event={"ID":"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13","Type":"ContainerStarted","Data":"3c006a4d75ae5e8f508176455f4470ba56ab95c81d55cff3975ee1b51fc8cfe6"} Jan 29 15:13:23 crc kubenswrapper[4757]: I0129 15:13:23.481566 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:23 crc kubenswrapper[4757]: I0129 15:13:23.511388 4757 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" podStartSLOduration=143.511371294 podStartE2EDuration="2m23.511371294s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:23.509782836 +0000 UTC m=+166.799033093" watchObservedRunningTime="2026-01-29 15:13:23.511371294 +0000 UTC m=+166.800621531" Jan 29 15:13:23 crc kubenswrapper[4757]: I0129 15:13:23.554233 4757 patch_prober.go:28] interesting pod/router-default-5444994796-h9rvk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:13:23 crc kubenswrapper[4757]: [-]has-synced failed: reason withheld Jan 29 15:13:23 crc kubenswrapper[4757]: [+]process-running ok Jan 29 15:13:23 crc kubenswrapper[4757]: healthz check failed Jan 29 15:13:23 crc kubenswrapper[4757]: I0129 15:13:23.554348 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9rvk" podUID="a5122101-998b-48d5-ae6e-c4746b2ba055" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:13:24 crc kubenswrapper[4757]: I0129 15:13:24.341323 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:13:24 crc kubenswrapper[4757]: I0129 15:13:24.348167 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-zrp48" Jan 29 15:13:24 crc kubenswrapper[4757]: I0129 15:13:24.537161 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs" event={"ID":"ef07c13a-83ac-4354-9c37-9b4950dd9259","Type":"ContainerStarted","Data":"0175d7a005827fcb4733a9dc14da83e3ee4ba356458ca3edbc5acfdda3ab1bde"} Jan 29 15:13:24 crc kubenswrapper[4757]: I0129 15:13:24.563749 4757 patch_prober.go:28] interesting pod/router-default-5444994796-h9rvk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:13:24 crc kubenswrapper[4757]: [-]has-synced failed: reason withheld Jan 29 15:13:24 crc kubenswrapper[4757]: [+]process-running ok Jan 29 15:13:24 crc kubenswrapper[4757]: healthz check failed Jan 29 15:13:24 crc kubenswrapper[4757]: I0129 15:13:24.563799 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9rvk" podUID="a5122101-998b-48d5-ae6e-c4746b2ba055" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:13:25 crc kubenswrapper[4757]: I0129 15:13:25.562424 4757 patch_prober.go:28] interesting pod/router-default-5444994796-h9rvk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:13:25 crc kubenswrapper[4757]: [-]has-synced failed: reason withheld Jan 29 15:13:25 crc kubenswrapper[4757]: [+]process-running ok Jan 29 15:13:25 crc kubenswrapper[4757]: healthz check failed Jan 29 15:13:25 crc kubenswrapper[4757]: I0129 15:13:25.562525 4757 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-h9rvk" podUID="a5122101-998b-48d5-ae6e-c4746b2ba055" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:13:25 crc kubenswrapper[4757]: I0129 15:13:25.588732 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" event={"ID":"7301d2e8-5210-4188-a6dd-ba1244e29ed1","Type":"ContainerStarted","Data":"5919e94825f8b5b5be585438ad885d87fcc2be21c297791060b1e2a7dfa5f566"} Jan 29 15:13:25 crc kubenswrapper[4757]: I0129 15:13:25.595742 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs" event={"ID":"ef07c13a-83ac-4354-9c37-9b4950dd9259","Type":"ContainerStarted","Data":"710b10a63c5eef14171d21a97ec73ae01b5cc2bc3c2c3a3f72eb8c9eb4358883"} Jan 29 15:13:26 crc kubenswrapper[4757]: I0129 15:13:26.551494 4757 patch_prober.go:28] interesting pod/router-default-5444994796-h9rvk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:13:26 crc kubenswrapper[4757]: [+]has-synced ok Jan 29 15:13:26 crc kubenswrapper[4757]: [+]process-running ok Jan 29 15:13:26 crc kubenswrapper[4757]: healthz check failed Jan 29 15:13:26 crc kubenswrapper[4757]: I0129 15:13:26.551571 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h9rvk" podUID="a5122101-998b-48d5-ae6e-c4746b2ba055" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:13:26 crc kubenswrapper[4757]: I0129 15:13:26.628226 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-drtf8" event={"ID":"8c722d3b-1755-4633-967e-35591890a231","Type":"ContainerStarted","Data":"1ba478ae1d81f6aa9077556d638234c38cee24da00592bf27830485470f6c7b4"} Jan 29 15:13:26 crc kubenswrapper[4757]: I0129 15:13:26.628607 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" Jan 29 15:13:26 crc kubenswrapper[4757]: I0129 15:13:26.635432 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" Jan 29 15:13:26 crc kubenswrapper[4757]: I0129 15:13:26.654229 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" podStartSLOduration=6.654195536 podStartE2EDuration="6.654195536s" podCreationTimestamp="2026-01-29 15:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:26.65070901 +0000 UTC m=+169.939959257" watchObservedRunningTime="2026-01-29 15:13:26.654195536 +0000 UTC m=+169.943445773" Jan 29 15:13:27 crc kubenswrapper[4757]: I0129 15:13:27.554243 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-h9rvk" Jan 29 15:13:27 crc kubenswrapper[4757]: I0129 15:13:27.573951 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-h9rvk" Jan 29 15:13:27 crc kubenswrapper[4757]: I0129 15:13:27.692497 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs" Jan 29 15:13:27 crc kubenswrapper[4757]: I0129 15:13:27.698415 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs" Jan 29 15:13:27 crc kubenswrapper[4757]: I0129 15:13:27.739876 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs" podStartSLOduration=7.739856736 podStartE2EDuration="7.739856736s" podCreationTimestamp="2026-01-29 15:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:27.713647694 +0000 UTC m=+171.002897931" watchObservedRunningTime="2026-01-29 15:13:27.739856736 +0000 UTC m=+171.029106963" Jan 29 15:13:28 crc kubenswrapper[4757]: I0129 15:13:28.759041 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-drtf8" event={"ID":"8c722d3b-1755-4633-967e-35591890a231","Type":"ContainerStarted","Data":"40d830c883a4d0bab1140285ab5cc793809ad14c1c48c5a8059282f3197f4a92"} Jan 29 15:13:28 crc kubenswrapper[4757]: I0129 15:13:28.791956 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-drtf8" podStartSLOduration=148.79193285 podStartE2EDuration="2m28.79193285s" podCreationTimestamp="2026-01-29 15:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:13:28.787240049 +0000 UTC m=+172.076490286" watchObservedRunningTime="2026-01-29 15:13:28.79193285 +0000 UTC m=+172.081183097" Jan 29 15:13:29 crc kubenswrapper[4757]: I0129 15:13:29.564421 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:13:29 crc kubenswrapper[4757]: I0129 15:13:29.568741 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:13:29 crc kubenswrapper[4757]: I0129 15:13:29.583669 4757 patch_prober.go:28] interesting pod/downloads-7954f5f757-gs77j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 29 15:13:29 crc kubenswrapper[4757]: I0129 15:13:29.583721 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gs77j" podUID="3e6ceaed-34b1-4c4f-abe3-96756d34e30f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 29 15:13:29 crc kubenswrapper[4757]: I0129 15:13:29.584942 4757 patch_prober.go:28] interesting pod/downloads-7954f5f757-gs77j container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 29 15:13:29 crc kubenswrapper[4757]: I0129 15:13:29.585224 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gs77j" podUID="3e6ceaed-34b1-4c4f-abe3-96756d34e30f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: 
connection refused" Jan 29 15:13:29 crc kubenswrapper[4757]: I0129 15:13:29.585288 4757 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-gs77j" Jan 29 15:13:29 crc kubenswrapper[4757]: I0129 15:13:29.586650 4757 patch_prober.go:28] interesting pod/downloads-7954f5f757-gs77j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 29 15:13:29 crc kubenswrapper[4757]: I0129 15:13:29.586706 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gs77j" podUID="3e6ceaed-34b1-4c4f-abe3-96756d34e30f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 29 15:13:29 crc kubenswrapper[4757]: I0129 15:13:29.588250 4757 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"d0bc679c69ac0e94034d7862caaaad4a3c9d97f01c26e41dd5cb4aff7667cfa3"} pod="openshift-console/downloads-7954f5f757-gs77j" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 29 15:13:29 crc kubenswrapper[4757]: I0129 15:13:29.588358 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-gs77j" podUID="3e6ceaed-34b1-4c4f-abe3-96756d34e30f" containerName="download-server" containerID="cri-o://d0bc679c69ac0e94034d7862caaaad4a3c9d97f01c26e41dd5cb4aff7667cfa3" gracePeriod=2 Jan 29 15:13:30 crc kubenswrapper[4757]: I0129 15:13:30.793013 4757 generic.go:334] "Generic (PLEG): container finished" podID="3e6ceaed-34b1-4c4f-abe3-96756d34e30f" containerID="d0bc679c69ac0e94034d7862caaaad4a3c9d97f01c26e41dd5cb4aff7667cfa3" exitCode=0 Jan 29 15:13:30 crc kubenswrapper[4757]: I0129 15:13:30.793059 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-gs77j" event={"ID":"3e6ceaed-34b1-4c4f-abe3-96756d34e30f","Type":"ContainerDied","Data":"d0bc679c69ac0e94034d7862caaaad4a3c9d97f01c26e41dd5cb4aff7667cfa3"} Jan 29 15:13:31 crc kubenswrapper[4757]: I0129 15:13:31.816480 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-gs77j" event={"ID":"3e6ceaed-34b1-4c4f-abe3-96756d34e30f","Type":"ContainerStarted","Data":"a1c9c67f63bcb74206fd53c9e3a9cee708887ea7b54b3e8a787f4defc9e8def2"} Jan 29 15:13:31 crc kubenswrapper[4757]: I0129 15:13:31.816847 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-gs77j" Jan 29 15:13:31 crc kubenswrapper[4757]: I0129 15:13:31.816992 4757 patch_prober.go:28] interesting pod/downloads-7954f5f757-gs77j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 29 15:13:31 crc kubenswrapper[4757]: I0129 15:13:31.817024 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gs77j" podUID="3e6ceaed-34b1-4c4f-abe3-96756d34e30f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 29 15:13:32 crc kubenswrapper[4757]: I0129 15:13:32.824454 4757 patch_prober.go:28] 
interesting pod/downloads-7954f5f757-gs77j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 29 15:13:32 crc kubenswrapper[4757]: I0129 15:13:32.824772 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gs77j" podUID="3e6ceaed-34b1-4c4f-abe3-96756d34e30f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 29 15:13:37 crc kubenswrapper[4757]: I0129 15:13:37.199210 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b4f9d5889-8tfls"] Jan 29 15:13:37 crc kubenswrapper[4757]: I0129 15:13:37.199859 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" podUID="7301d2e8-5210-4188-a6dd-ba1244e29ed1" containerName="controller-manager" containerID="cri-o://5919e94825f8b5b5be585438ad885d87fcc2be21c297791060b1e2a7dfa5f566" gracePeriod=30 Jan 29 15:13:37 crc kubenswrapper[4757]: I0129 15:13:37.209386 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs"] Jan 29 15:13:37 crc kubenswrapper[4757]: I0129 15:13:37.209619 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs" podUID="ef07c13a-83ac-4354-9c37-9b4950dd9259" containerName="route-controller-manager" containerID="cri-o://710b10a63c5eef14171d21a97ec73ae01b5cc2bc3c2c3a3f72eb8c9eb4358883" gracePeriod=30 Jan 29 15:13:38 crc kubenswrapper[4757]: I0129 15:13:38.861571 4757 generic.go:334] "Generic (PLEG): container finished" podID="7301d2e8-5210-4188-a6dd-ba1244e29ed1" containerID="5919e94825f8b5b5be585438ad885d87fcc2be21c297791060b1e2a7dfa5f566" exitCode=0 Jan 29 15:13:38 crc kubenswrapper[4757]: I0129 15:13:38.861677 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" event={"ID":"7301d2e8-5210-4188-a6dd-ba1244e29ed1","Type":"ContainerDied","Data":"5919e94825f8b5b5be585438ad885d87fcc2be21c297791060b1e2a7dfa5f566"} Jan 29 15:13:38 crc kubenswrapper[4757]: I0129 15:13:38.866552 4757 generic.go:334] "Generic (PLEG): container finished" podID="ef07c13a-83ac-4354-9c37-9b4950dd9259" containerID="710b10a63c5eef14171d21a97ec73ae01b5cc2bc3c2c3a3f72eb8c9eb4358883" exitCode=0 Jan 29 15:13:38 crc kubenswrapper[4757]: I0129 15:13:38.866601 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs" event={"ID":"ef07c13a-83ac-4354-9c37-9b4950dd9259","Type":"ContainerDied","Data":"710b10a63c5eef14171d21a97ec73ae01b5cc2bc3c2c3a3f72eb8c9eb4358883"} Jan 29 15:13:39 crc kubenswrapper[4757]: I0129 15:13:39.583306 4757 patch_prober.go:28] interesting pod/downloads-7954f5f757-gs77j container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 29 15:13:39 crc kubenswrapper[4757]: I0129 15:13:39.583381 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gs77j" podUID="3e6ceaed-34b1-4c4f-abe3-96756d34e30f" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 29 15:13:39 crc kubenswrapper[4757]: I0129 15:13:39.583791 4757 patch_prober.go:28] interesting pod/downloads-7954f5f757-gs77j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 29 15:13:39 crc kubenswrapper[4757]: I0129 15:13:39.583817 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gs77j" podUID="3e6ceaed-34b1-4c4f-abe3-96756d34e30f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 29 15:13:42 crc kubenswrapper[4757]: I0129 15:13:42.020894 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:13:42 crc kubenswrapper[4757]: I0129 15:13:42.610662 4757 patch_prober.go:28] interesting pod/controller-manager-6b4f9d5889-8tfls container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Jan 29 15:13:42 crc kubenswrapper[4757]: I0129 15:13:42.610716 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" podUID="7301d2e8-5210-4188-a6dd-ba1244e29ed1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Jan 29 15:13:42 crc kubenswrapper[4757]: I0129 15:13:42.641738 4757 patch_prober.go:28] interesting pod/route-controller-manager-57694ccfbc-5hsxs container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Jan 29 15:13:42 crc kubenswrapper[4757]: I0129 15:13:42.641786 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs" podUID="ef07c13a-83ac-4354-9c37-9b4950dd9259" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Jan 29 15:13:47 crc kubenswrapper[4757]: I0129 15:13:47.034044 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:13:47 crc kubenswrapper[4757]: I0129 15:13:47.604579 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:13:47 crc kubenswrapper[4757]: I0129 15:13:47.604647 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 29 15:13:49 crc kubenswrapper[4757]: I0129 15:13:49.403935 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ssg7r" Jan 29 15:13:49 crc kubenswrapper[4757]: I0129 15:13:49.582197 4757 patch_prober.go:28] interesting pod/downloads-7954f5f757-gs77j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 29 15:13:49 crc kubenswrapper[4757]: I0129 15:13:49.582248 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gs77j" podUID="3e6ceaed-34b1-4c4f-abe3-96756d34e30f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 29 15:13:49 crc kubenswrapper[4757]: I0129 15:13:49.582243 4757 patch_prober.go:28] interesting pod/downloads-7954f5f757-gs77j container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 29 15:13:49 crc kubenswrapper[4757]: I0129 15:13:49.582318 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gs77j" podUID="3e6ceaed-34b1-4c4f-abe3-96756d34e30f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 29 15:13:52 crc kubenswrapper[4757]: I0129 15:13:52.611871 4757 patch_prober.go:28] interesting pod/controller-manager-6b4f9d5889-8tfls container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Jan 29 15:13:52 crc kubenswrapper[4757]: I0129 15:13:52.612304 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" podUID="7301d2e8-5210-4188-a6dd-ba1244e29ed1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Jan 29 15:13:53 crc kubenswrapper[4757]: I0129 15:13:53.113063 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 15:13:53 crc kubenswrapper[4757]: I0129 15:13:53.114244 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:13:53 crc kubenswrapper[4757]: I0129 15:13:53.116888 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 15:13:53 crc kubenswrapper[4757]: I0129 15:13:53.116942 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 15:13:53 crc kubenswrapper[4757]: I0129 15:13:53.121310 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 15:13:53 crc kubenswrapper[4757]: I0129 15:13:53.278853 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/799dd429-f9fe-4936-a5aa-62ab56ea855d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"799dd429-f9fe-4936-a5aa-62ab56ea855d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:13:53 crc kubenswrapper[4757]: I0129 15:13:53.278961 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/799dd429-f9fe-4936-a5aa-62ab56ea855d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"799dd429-f9fe-4936-a5aa-62ab56ea855d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:13:53 crc kubenswrapper[4757]: I0129 15:13:53.380389 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/799dd429-f9fe-4936-a5aa-62ab56ea855d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"799dd429-f9fe-4936-a5aa-62ab56ea855d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:13:53 crc kubenswrapper[4757]: I0129 15:13:53.380456 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/799dd429-f9fe-4936-a5aa-62ab56ea855d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"799dd429-f9fe-4936-a5aa-62ab56ea855d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:13:53 crc kubenswrapper[4757]: I0129 15:13:53.380539 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/799dd429-f9fe-4936-a5aa-62ab56ea855d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"799dd429-f9fe-4936-a5aa-62ab56ea855d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:13:53 crc kubenswrapper[4757]: I0129 15:13:53.414268 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/799dd429-f9fe-4936-a5aa-62ab56ea855d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"799dd429-f9fe-4936-a5aa-62ab56ea855d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:13:53 crc kubenswrapper[4757]: I0129 15:13:53.493151 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:13:53 crc kubenswrapper[4757]: I0129 15:13:53.643404 4757 patch_prober.go:28] interesting pod/route-controller-manager-57694ccfbc-5hsxs container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 15:13:53 crc kubenswrapper[4757]: I0129 15:13:53.643474 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs" podUID="ef07c13a-83ac-4354-9c37-9b4950dd9259" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 15:13:58 crc kubenswrapper[4757]: I0129 15:13:58.712803 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 15:13:58 crc kubenswrapper[4757]: I0129 15:13:58.713721 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:13:58 crc kubenswrapper[4757]: I0129 15:13:58.717380 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 15:13:58 crc kubenswrapper[4757]: I0129 15:13:58.763618 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:13:58 crc kubenswrapper[4757]: I0129 15:13:58.763674 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae-var-lock\") pod \"installer-9-crc\" (UID: \"c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:13:58 crc kubenswrapper[4757]: I0129 15:13:58.763727 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae-kube-api-access\") pod \"installer-9-crc\" (UID: \"c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:13:58 crc kubenswrapper[4757]: I0129 15:13:58.864811 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:13:58 crc kubenswrapper[4757]: I0129 15:13:58.864888 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae-var-lock\") pod \"installer-9-crc\" (UID: \"c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:13:58 crc kubenswrapper[4757]: I0129 15:13:58.864930 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:13:58 crc kubenswrapper[4757]: I0129 15:13:58.864953 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae-kube-api-access\") pod \"installer-9-crc\" (UID: \"c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:13:58 crc kubenswrapper[4757]: I0129 15:13:58.864977 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae-var-lock\") pod \"installer-9-crc\" (UID: \"c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:13:58 crc kubenswrapper[4757]: I0129 15:13:58.880353 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae-kube-api-access\") pod \"installer-9-crc\" (UID: \"c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:13:59 crc kubenswrapper[4757]: I0129 15:13:59.045755 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:13:59 crc kubenswrapper[4757]: I0129 15:13:59.585854 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-gs77j" Jan 29 15:14:02 crc kubenswrapper[4757]: I0129 15:14:02.610980 4757 patch_prober.go:28] interesting pod/controller-manager-6b4f9d5889-8tfls container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Jan 29 15:14:02 crc kubenswrapper[4757]: I0129 15:14:02.611316 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" podUID="7301d2e8-5210-4188-a6dd-ba1244e29ed1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Jan 29 15:14:03 crc kubenswrapper[4757]: I0129 15:14:03.641185 4757 patch_prober.go:28] interesting pod/route-controller-manager-57694ccfbc-5hsxs container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: i/o timeout" start-of-body= Jan 29 15:14:03 crc kubenswrapper[4757]: I0129 15:14:03.641252 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs" podUID="ef07c13a-83ac-4354-9c37-9b4950dd9259" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: i/o timeout" Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.388522 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs" Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.391985 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj27p\" (UniqueName: \"kubernetes.io/projected/ef07c13a-83ac-4354-9c37-9b4950dd9259-kube-api-access-dj27p\") pod \"ef07c13a-83ac-4354-9c37-9b4950dd9259\" (UID: \"ef07c13a-83ac-4354-9c37-9b4950dd9259\") " Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.392063 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef07c13a-83ac-4354-9c37-9b4950dd9259-serving-cert\") pod \"ef07c13a-83ac-4354-9c37-9b4950dd9259\" (UID: \"ef07c13a-83ac-4354-9c37-9b4950dd9259\") " Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.392101 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef07c13a-83ac-4354-9c37-9b4950dd9259-config\") pod \"ef07c13a-83ac-4354-9c37-9b4950dd9259\" (UID: \"ef07c13a-83ac-4354-9c37-9b4950dd9259\") " Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.392140 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef07c13a-83ac-4354-9c37-9b4950dd9259-client-ca\") pod \"ef07c13a-83ac-4354-9c37-9b4950dd9259\" (UID: \"ef07c13a-83ac-4354-9c37-9b4950dd9259\") " Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.393184 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef07c13a-83ac-4354-9c37-9b4950dd9259-client-ca" (OuterVolumeSpecName: "client-ca") pod "ef07c13a-83ac-4354-9c37-9b4950dd9259" (UID: "ef07c13a-83ac-4354-9c37-9b4950dd9259"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.393625 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef07c13a-83ac-4354-9c37-9b4950dd9259-config" (OuterVolumeSpecName: "config") pod "ef07c13a-83ac-4354-9c37-9b4950dd9259" (UID: "ef07c13a-83ac-4354-9c37-9b4950dd9259"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.398564 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef07c13a-83ac-4354-9c37-9b4950dd9259-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ef07c13a-83ac-4354-9c37-9b4950dd9259" (UID: "ef07c13a-83ac-4354-9c37-9b4950dd9259"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.399873 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef07c13a-83ac-4354-9c37-9b4950dd9259-kube-api-access-dj27p" (OuterVolumeSpecName: "kube-api-access-dj27p") pod "ef07c13a-83ac-4354-9c37-9b4950dd9259" (UID: "ef07c13a-83ac-4354-9c37-9b4950dd9259"). InnerVolumeSpecName "kube-api-access-dj27p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.423288 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-868865878f-6fk46"] Jan 29 15:14:06 crc kubenswrapper[4757]: E0129 15:14:06.423552 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef07c13a-83ac-4354-9c37-9b4950dd9259" containerName="route-controller-manager" Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.423569 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef07c13a-83ac-4354-9c37-9b4950dd9259" containerName="route-controller-manager" Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.423705 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef07c13a-83ac-4354-9c37-9b4950dd9259" containerName="route-controller-manager" Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.424174 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-868865878f-6fk46" Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.435295 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-868865878f-6fk46"] Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.493967 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj27p\" (UniqueName: \"kubernetes.io/projected/ef07c13a-83ac-4354-9c37-9b4950dd9259-kube-api-access-dj27p\") on node \"crc\" DevicePath \"\"" Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.494029 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef07c13a-83ac-4354-9c37-9b4950dd9259-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.494045 4757 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef07c13a-83ac-4354-9c37-9b4950dd9259-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.494059 4757 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef07c13a-83ac-4354-9c37-9b4950dd9259-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.595347 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69d27087-23a4-497a-8b0e-30397961c886-client-ca\") pod \"route-controller-manager-868865878f-6fk46\" (UID: \"69d27087-23a4-497a-8b0e-30397961c886\") " pod="openshift-route-controller-manager/route-controller-manager-868865878f-6fk46" Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.595769 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69d27087-23a4-497a-8b0e-30397961c886-serving-cert\") pod \"route-controller-manager-868865878f-6fk46\" (UID: \"69d27087-23a4-497a-8b0e-30397961c886\") " pod="openshift-route-controller-manager/route-controller-manager-868865878f-6fk46" Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.595917 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd8wb\" (UniqueName: \"kubernetes.io/projected/69d27087-23a4-497a-8b0e-30397961c886-kube-api-access-vd8wb\") pod 
\"route-controller-manager-868865878f-6fk46\" (UID: \"69d27087-23a4-497a-8b0e-30397961c886\") " pod="openshift-route-controller-manager/route-controller-manager-868865878f-6fk46" Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.595963 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69d27087-23a4-497a-8b0e-30397961c886-config\") pod \"route-controller-manager-868865878f-6fk46\" (UID: \"69d27087-23a4-497a-8b0e-30397961c886\") " pod="openshift-route-controller-manager/route-controller-manager-868865878f-6fk46" Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.697049 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69d27087-23a4-497a-8b0e-30397961c886-serving-cert\") pod \"route-controller-manager-868865878f-6fk46\" (UID: \"69d27087-23a4-497a-8b0e-30397961c886\") " pod="openshift-route-controller-manager/route-controller-manager-868865878f-6fk46" Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.697146 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd8wb\" (UniqueName: \"kubernetes.io/projected/69d27087-23a4-497a-8b0e-30397961c886-kube-api-access-vd8wb\") pod \"route-controller-manager-868865878f-6fk46\" (UID: \"69d27087-23a4-497a-8b0e-30397961c886\") " pod="openshift-route-controller-manager/route-controller-manager-868865878f-6fk46" Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.697181 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69d27087-23a4-497a-8b0e-30397961c886-config\") pod \"route-controller-manager-868865878f-6fk46\" (UID: \"69d27087-23a4-497a-8b0e-30397961c886\") " pod="openshift-route-controller-manager/route-controller-manager-868865878f-6fk46" Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.697225 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69d27087-23a4-497a-8b0e-30397961c886-client-ca\") pod \"route-controller-manager-868865878f-6fk46\" (UID: \"69d27087-23a4-497a-8b0e-30397961c886\") " pod="openshift-route-controller-manager/route-controller-manager-868865878f-6fk46" Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.698313 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69d27087-23a4-497a-8b0e-30397961c886-client-ca\") pod \"route-controller-manager-868865878f-6fk46\" (UID: \"69d27087-23a4-497a-8b0e-30397961c886\") " pod="openshift-route-controller-manager/route-controller-manager-868865878f-6fk46" Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.700022 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69d27087-23a4-497a-8b0e-30397961c886-config\") pod \"route-controller-manager-868865878f-6fk46\" (UID: \"69d27087-23a4-497a-8b0e-30397961c886\") " pod="openshift-route-controller-manager/route-controller-manager-868865878f-6fk46" Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.717550 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69d27087-23a4-497a-8b0e-30397961c886-serving-cert\") pod \"route-controller-manager-868865878f-6fk46\" (UID: \"69d27087-23a4-497a-8b0e-30397961c886\") " 
pod="openshift-route-controller-manager/route-controller-manager-868865878f-6fk46" Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.722017 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd8wb\" (UniqueName: \"kubernetes.io/projected/69d27087-23a4-497a-8b0e-30397961c886-kube-api-access-vd8wb\") pod \"route-controller-manager-868865878f-6fk46\" (UID: \"69d27087-23a4-497a-8b0e-30397961c886\") " pod="openshift-route-controller-manager/route-controller-manager-868865878f-6fk46" Jan 29 15:14:06 crc kubenswrapper[4757]: I0129 15:14:06.772544 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-868865878f-6fk46" Jan 29 15:14:07 crc kubenswrapper[4757]: I0129 15:14:07.028510 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs" event={"ID":"ef07c13a-83ac-4354-9c37-9b4950dd9259","Type":"ContainerDied","Data":"0175d7a005827fcb4733a9dc14da83e3ee4ba356458ca3edbc5acfdda3ab1bde"} Jan 29 15:14:07 crc kubenswrapper[4757]: I0129 15:14:07.028550 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs" Jan 29 15:14:07 crc kubenswrapper[4757]: I0129 15:14:07.028577 4757 scope.go:117] "RemoveContainer" containerID="710b10a63c5eef14171d21a97ec73ae01b5cc2bc3c2c3a3f72eb8c9eb4358883" Jan 29 15:14:07 crc kubenswrapper[4757]: I0129 15:14:07.054099 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs"] Jan 29 15:14:07 crc kubenswrapper[4757]: I0129 15:14:07.057997 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57694ccfbc-5hsxs"] Jan 29 15:14:07 crc kubenswrapper[4757]: I0129 15:14:07.405668 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef07c13a-83ac-4354-9c37-9b4950dd9259" path="/var/lib/kubelet/pods/ef07c13a-83ac-4354-9c37-9b4950dd9259/volumes" Jan 29 15:14:07 crc kubenswrapper[4757]: E0129 15:14:07.590513 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:14:07 crc kubenswrapper[4757]: E0129 15:14:07.590703 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rr9fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-99p4m_openshift-marketplace(6f40510d-f93a-4a84-ad4a-e503fa0bdf09): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:14:07 crc kubenswrapper[4757]: E0129 15:14:07.591891 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-99p4m" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" Jan 29 15:14:07 crc kubenswrapper[4757]: E0129 15:14:07.937765 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:14:07 crc kubenswrapper[4757]: E0129 15:14:07.937932 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-swndd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-v8v75_openshift-marketplace(bce413ab-1d96-4e66-b700-db27f6b52966): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:14:07 crc kubenswrapper[4757]: E0129 15:14:07.939086 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-v8v75" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" Jan 29 15:14:13 crc kubenswrapper[4757]: I0129 15:14:13.610373 4757 patch_prober.go:28] interesting pod/controller-manager-6b4f9d5889-8tfls container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 15:14:13 crc kubenswrapper[4757]: I0129 15:14:13.610913 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" podUID="7301d2e8-5210-4188-a6dd-ba1244e29ed1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 15:14:14 crc kubenswrapper[4757]: E0129 15:14:14.227700 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-v8v75" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" Jan 29 15:14:14 crc kubenswrapper[4757]: E0129 15:14:14.227734 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-99p4m" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" Jan 29 15:14:14 crc kubenswrapper[4757]: E0129 15:14:14.619160 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:14:14 crc kubenswrapper[4757]: E0129 15:14:14.619486 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bg5b9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-57qth_openshift-marketplace(d4596539-1be7-44ac-8e25-3fd37c823166): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:14:14 crc kubenswrapper[4757]: E0129 15:14:14.620649 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-57qth" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" Jan 29 15:14:17 crc kubenswrapper[4757]: I0129 15:14:17.605164 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:14:17 crc kubenswrapper[4757]: I0129 15:14:17.605227 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:14:17 crc kubenswrapper[4757]: I0129 15:14:17.605305 4757 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" Jan 29 15:14:17 crc kubenswrapper[4757]: I0129 15:14:17.605865 4757 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0"} pod="openshift-machine-config-operator/machine-config-daemon-45q8t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:14:17 crc kubenswrapper[4757]: I0129 15:14:17.605921 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" containerID="cri-o://4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0" gracePeriod=600 Jan 29 15:14:17 crc kubenswrapper[4757]: E0129 15:14:17.612184 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-57qth" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" Jan 29 15:14:19 crc kubenswrapper[4757]: I0129 15:14:19.098164 4757 generic.go:334] "Generic (PLEG): container finished" podID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerID="4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0" exitCode=0 Jan 29 15:14:19 crc kubenswrapper[4757]: I0129 15:14:19.098224 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerDied","Data":"4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0"} Jan 29 15:14:20 crc kubenswrapper[4757]: E0129 15:14:20.605514 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:14:20 crc kubenswrapper[4757]: E0129 15:14:20.605663 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
Jan 29 15:14:20 crc kubenswrapper[4757]: E0129 15:14:20.605663 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjf2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-pxw6w_openshift-marketplace(fd7070d7-3870-49f1-8976-094ad97b6efc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 29 15:14:20 crc kubenswrapper[4757]: E0129 15:14:20.607464 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-pxw6w" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc"
Jan 29 15:14:20 crc kubenswrapper[4757]: E0129 15:14:20.900664 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 29 15:14:20 crc kubenswrapper[4757]: E0129 15:14:20.901414 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d8p6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-c5pw7_openshift-marketplace(4e10b6b9-259a-417c-ba5d-311e75543637): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:14:20 crc kubenswrapper[4757]: E0129 15:14:20.901930 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:14:20 crc kubenswrapper[4757]: E0129 15:14:20.902019 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rm2n7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-2jc8z_openshift-marketplace(43de85f7-11df-4e6f-8d3f-b982b03ce802): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:14:20 crc kubenswrapper[4757]: E0129 15:14:20.903074 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-c5pw7" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" Jan 29 15:14:20 crc kubenswrapper[4757]: E0129 15:14:20.903140 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-2jc8z" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" Jan 29 15:14:20 crc kubenswrapper[4757]: I0129 15:14:20.910841 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" Jan 29 15:14:20 crc kubenswrapper[4757]: E0129 15:14:20.950200 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:14:20 crc kubenswrapper[4757]: E0129 15:14:20.950382 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgwj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-jhlrf_openshift-marketplace(92724a14-21db-441f-b509-142dc0a8dc15): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:14:20 crc kubenswrapper[4757]: E0129 15:14:20.951642 4757 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-jhlrf" podUID="92724a14-21db-441f-b509-142dc0a8dc15" Jan 29 15:14:20 crc kubenswrapper[4757]: I0129 15:14:20.951693 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-99bd47d99-ts4xv"] Jan 29 15:14:20 crc kubenswrapper[4757]: E0129 15:14:20.951912 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7301d2e8-5210-4188-a6dd-ba1244e29ed1" containerName="controller-manager" Jan 29 15:14:20 crc kubenswrapper[4757]: I0129 15:14:20.951923 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="7301d2e8-5210-4188-a6dd-ba1244e29ed1" containerName="controller-manager" Jan 29 15:14:20 crc kubenswrapper[4757]: I0129 15:14:20.952016 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="7301d2e8-5210-4188-a6dd-ba1244e29ed1" containerName="controller-manager" Jan 29 15:14:20 crc kubenswrapper[4757]: I0129 15:14:20.952974 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-99bd47d99-ts4xv" Jan 29 15:14:20 crc kubenswrapper[4757]: I0129 15:14:20.971846 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-99bd47d99-ts4xv"] Jan 29 15:14:21 crc kubenswrapper[4757]: E0129 15:14:21.015261 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:14:21 crc kubenswrapper[4757]: E0129 15:14:21.015863 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tvvcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-btp4k_openshift-marketplace(f2342b27-9060-4697-a957-65d07f099e82): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:14:21 crc kubenswrapper[4757]: E0129 15:14:21.017109 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-btp4k" podUID="f2342b27-9060-4697-a957-65d07f099e82" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.084815 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7301d2e8-5210-4188-a6dd-ba1244e29ed1-client-ca\") pod \"7301d2e8-5210-4188-a6dd-ba1244e29ed1\" (UID: \"7301d2e8-5210-4188-a6dd-ba1244e29ed1\") " Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.084860 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7301d2e8-5210-4188-a6dd-ba1244e29ed1-config\") pod \"7301d2e8-5210-4188-a6dd-ba1244e29ed1\" (UID: \"7301d2e8-5210-4188-a6dd-ba1244e29ed1\") " Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.084940 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7301d2e8-5210-4188-a6dd-ba1244e29ed1-serving-cert\") pod \"7301d2e8-5210-4188-a6dd-ba1244e29ed1\" (UID: \"7301d2e8-5210-4188-a6dd-ba1244e29ed1\") " Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.084961 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7301d2e8-5210-4188-a6dd-ba1244e29ed1-proxy-ca-bundles\") pod \"7301d2e8-5210-4188-a6dd-ba1244e29ed1\" (UID: \"7301d2e8-5210-4188-a6dd-ba1244e29ed1\") " Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.085009 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls27d\" (UniqueName: \"kubernetes.io/projected/7301d2e8-5210-4188-a6dd-ba1244e29ed1-kube-api-access-ls27d\") pod \"7301d2e8-5210-4188-a6dd-ba1244e29ed1\" (UID: \"7301d2e8-5210-4188-a6dd-ba1244e29ed1\") " Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.085188 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c76747ba-3b36-43df-9263-c56a783ce82f-client-ca\") pod \"controller-manager-99bd47d99-ts4xv\" (UID: \"c76747ba-3b36-43df-9263-c56a783ce82f\") " pod="openshift-controller-manager/controller-manager-99bd47d99-ts4xv" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.085214 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c76747ba-3b36-43df-9263-c56a783ce82f-proxy-ca-bundles\") pod \"controller-manager-99bd47d99-ts4xv\" (UID: \"c76747ba-3b36-43df-9263-c56a783ce82f\") " pod="openshift-controller-manager/controller-manager-99bd47d99-ts4xv" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.085242 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c76747ba-3b36-43df-9263-c56a783ce82f-config\") pod 
\"controller-manager-99bd47d99-ts4xv\" (UID: \"c76747ba-3b36-43df-9263-c56a783ce82f\") " pod="openshift-controller-manager/controller-manager-99bd47d99-ts4xv" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.085257 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c76747ba-3b36-43df-9263-c56a783ce82f-serving-cert\") pod \"controller-manager-99bd47d99-ts4xv\" (UID: \"c76747ba-3b36-43df-9263-c56a783ce82f\") " pod="openshift-controller-manager/controller-manager-99bd47d99-ts4xv" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.085318 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnfn7\" (UniqueName: \"kubernetes.io/projected/c76747ba-3b36-43df-9263-c56a783ce82f-kube-api-access-gnfn7\") pod \"controller-manager-99bd47d99-ts4xv\" (UID: \"c76747ba-3b36-43df-9263-c56a783ce82f\") " pod="openshift-controller-manager/controller-manager-99bd47d99-ts4xv" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.085822 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7301d2e8-5210-4188-a6dd-ba1244e29ed1-config" (OuterVolumeSpecName: "config") pod "7301d2e8-5210-4188-a6dd-ba1244e29ed1" (UID: "7301d2e8-5210-4188-a6dd-ba1244e29ed1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.085836 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7301d2e8-5210-4188-a6dd-ba1244e29ed1-client-ca" (OuterVolumeSpecName: "client-ca") pod "7301d2e8-5210-4188-a6dd-ba1244e29ed1" (UID: "7301d2e8-5210-4188-a6dd-ba1244e29ed1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.088928 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7301d2e8-5210-4188-a6dd-ba1244e29ed1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7301d2e8-5210-4188-a6dd-ba1244e29ed1" (UID: "7301d2e8-5210-4188-a6dd-ba1244e29ed1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.092329 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7301d2e8-5210-4188-a6dd-ba1244e29ed1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7301d2e8-5210-4188-a6dd-ba1244e29ed1" (UID: "7301d2e8-5210-4188-a6dd-ba1244e29ed1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.094205 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.107421 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7301d2e8-5210-4188-a6dd-ba1244e29ed1-kube-api-access-ls27d" (OuterVolumeSpecName: "kube-api-access-ls27d") pod "7301d2e8-5210-4188-a6dd-ba1244e29ed1" (UID: "7301d2e8-5210-4188-a6dd-ba1244e29ed1"). InnerVolumeSpecName "kube-api-access-ls27d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.122200 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerStarted","Data":"989f6c946474d5c13e79a0e6cd5a831a42488fc707f84bbd376773aebb6df314"} Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.124860 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" event={"ID":"7301d2e8-5210-4188-a6dd-ba1244e29ed1","Type":"ContainerDied","Data":"d88d3ef9f179fbc9a2e8df871c27dba478d5e76b7ddb9ab94faeeeae199957f3"} Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.124890 4757 scope.go:117] "RemoveContainer" containerID="5919e94825f8b5b5be585438ad885d87fcc2be21c297791060b1e2a7dfa5f566" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.124949 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b4f9d5889-8tfls" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.139026 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae","Type":"ContainerStarted","Data":"a2ac5578999ebfded166ff69fc1e3fe914ebdfeb39fc12cfecda3323a502d7e0"} Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.169098 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b4f9d5889-8tfls"] Jan 29 15:14:21 crc kubenswrapper[4757]: E0129 15:14:21.169360 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jhlrf" podUID="92724a14-21db-441f-b509-142dc0a8dc15" Jan 29 15:14:21 crc kubenswrapper[4757]: E0129 15:14:21.169496 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-btp4k" podUID="f2342b27-9060-4697-a957-65d07f099e82" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.174731 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6b4f9d5889-8tfls"] Jan 29 15:14:21 crc kubenswrapper[4757]: E0129 15:14:21.178481 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2jc8z" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" Jan 29 15:14:21 crc kubenswrapper[4757]: E0129 15:14:21.178617 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c5pw7" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" Jan 29 15:14:21 crc kubenswrapper[4757]: E0129 15:14:21.178672 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pxw6w" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.186791 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnfn7\" (UniqueName: \"kubernetes.io/projected/c76747ba-3b36-43df-9263-c56a783ce82f-kube-api-access-gnfn7\") pod \"controller-manager-99bd47d99-ts4xv\" (UID: \"c76747ba-3b36-43df-9263-c56a783ce82f\") " pod="openshift-controller-manager/controller-manager-99bd47d99-ts4xv" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.189296 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c76747ba-3b36-43df-9263-c56a783ce82f-client-ca\") pod \"controller-manager-99bd47d99-ts4xv\" (UID: \"c76747ba-3b36-43df-9263-c56a783ce82f\") " pod="openshift-controller-manager/controller-manager-99bd47d99-ts4xv" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.192687 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c76747ba-3b36-43df-9263-c56a783ce82f-proxy-ca-bundles\") pod \"controller-manager-99bd47d99-ts4xv\" (UID: \"c76747ba-3b36-43df-9263-c56a783ce82f\") " pod="openshift-controller-manager/controller-manager-99bd47d99-ts4xv" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.192912 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c76747ba-3b36-43df-9263-c56a783ce82f-config\") pod \"controller-manager-99bd47d99-ts4xv\" (UID: \"c76747ba-3b36-43df-9263-c56a783ce82f\") " pod="openshift-controller-manager/controller-manager-99bd47d99-ts4xv" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.194660 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c76747ba-3b36-43df-9263-c56a783ce82f-serving-cert\") pod \"controller-manager-99bd47d99-ts4xv\" (UID: \"c76747ba-3b36-43df-9263-c56a783ce82f\") " pod="openshift-controller-manager/controller-manager-99bd47d99-ts4xv" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.194851 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c76747ba-3b36-43df-9263-c56a783ce82f-proxy-ca-bundles\") pod \"controller-manager-99bd47d99-ts4xv\" (UID: \"c76747ba-3b36-43df-9263-c56a783ce82f\") " pod="openshift-controller-manager/controller-manager-99bd47d99-ts4xv" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.192468 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c76747ba-3b36-43df-9263-c56a783ce82f-client-ca\") pod \"controller-manager-99bd47d99-ts4xv\" (UID: \"c76747ba-3b36-43df-9263-c56a783ce82f\") " pod="openshift-controller-manager/controller-manager-99bd47d99-ts4xv" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.194622 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c76747ba-3b36-43df-9263-c56a783ce82f-config\") pod \"controller-manager-99bd47d99-ts4xv\" (UID: \"c76747ba-3b36-43df-9263-c56a783ce82f\") " pod="openshift-controller-manager/controller-manager-99bd47d99-ts4xv" Jan 29 15:14:21 crc 
kubenswrapper[4757]: I0129 15:14:21.195305 4757 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7301d2e8-5210-4188-a6dd-ba1244e29ed1-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.195325 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls27d\" (UniqueName: \"kubernetes.io/projected/7301d2e8-5210-4188-a6dd-ba1244e29ed1-kube-api-access-ls27d\") on node \"crc\" DevicePath \"\"" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.195339 4757 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7301d2e8-5210-4188-a6dd-ba1244e29ed1-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.195347 4757 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7301d2e8-5210-4188-a6dd-ba1244e29ed1-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.195356 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7301d2e8-5210-4188-a6dd-ba1244e29ed1-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.202355 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c76747ba-3b36-43df-9263-c56a783ce82f-serving-cert\") pod \"controller-manager-99bd47d99-ts4xv\" (UID: \"c76747ba-3b36-43df-9263-c56a783ce82f\") " pod="openshift-controller-manager/controller-manager-99bd47d99-ts4xv" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.213002 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnfn7\" (UniqueName: \"kubernetes.io/projected/c76747ba-3b36-43df-9263-c56a783ce82f-kube-api-access-gnfn7\") pod \"controller-manager-99bd47d99-ts4xv\" (UID: \"c76747ba-3b36-43df-9263-c56a783ce82f\") " pod="openshift-controller-manager/controller-manager-99bd47d99-ts4xv" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.255090 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.280935 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-99bd47d99-ts4xv" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.354108 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-868865878f-6fk46"] Jan 29 15:14:21 crc kubenswrapper[4757]: W0129 15:14:21.380721 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69d27087_23a4_497a_8b0e_30397961c886.slice/crio-7aa0fb1c6ddae56a3166f86e59f239760460b2456b7f65bf1535d48bc42d506f WatchSource:0}: Error finding container 7aa0fb1c6ddae56a3166f86e59f239760460b2456b7f65bf1535d48bc42d506f: Status 404 returned error can't find the container with id 7aa0fb1c6ddae56a3166f86e59f239760460b2456b7f65bf1535d48bc42d506f Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.409984 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7301d2e8-5210-4188-a6dd-ba1244e29ed1" path="/var/lib/kubelet/pods/7301d2e8-5210-4188-a6dd-ba1244e29ed1/volumes" Jan 29 15:14:21 crc kubenswrapper[4757]: I0129 15:14:21.733444 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-99bd47d99-ts4xv"] Jan 29 15:14:21 crc kubenswrapper[4757]: W0129 15:14:21.741063 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc76747ba_3b36_43df_9263_c56a783ce82f.slice/crio-b68fb0e59eca7596a6e529e14c6a548a4f4302d23606fe9c9caab8a47d3ea7b0 WatchSource:0}: Error finding container b68fb0e59eca7596a6e529e14c6a548a4f4302d23606fe9c9caab8a47d3ea7b0: Status 404 returned error can't find the container with id b68fb0e59eca7596a6e529e14c6a548a4f4302d23606fe9c9caab8a47d3ea7b0 Jan 29 15:14:22 crc kubenswrapper[4757]: I0129 15:14:22.172150 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-868865878f-6fk46" event={"ID":"69d27087-23a4-497a-8b0e-30397961c886","Type":"ContainerStarted","Data":"3f14b0979e996f66a08e35b6e288e64d0b200664c2762cb8b126f01b7a4c641e"} Jan 29 15:14:22 crc kubenswrapper[4757]: I0129 15:14:22.172191 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-868865878f-6fk46" event={"ID":"69d27087-23a4-497a-8b0e-30397961c886","Type":"ContainerStarted","Data":"7aa0fb1c6ddae56a3166f86e59f239760460b2456b7f65bf1535d48bc42d506f"} Jan 29 15:14:22 crc kubenswrapper[4757]: I0129 15:14:22.172559 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-868865878f-6fk46" Jan 29 15:14:22 crc kubenswrapper[4757]: I0129 15:14:22.175538 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-99bd47d99-ts4xv" event={"ID":"c76747ba-3b36-43df-9263-c56a783ce82f","Type":"ContainerStarted","Data":"1020fb9050c71d661b955823624f5cfd582b6d2784873e1bc39087e27c91ff61"} Jan 29 15:14:22 crc kubenswrapper[4757]: I0129 15:14:22.175576 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-99bd47d99-ts4xv" event={"ID":"c76747ba-3b36-43df-9263-c56a783ce82f","Type":"ContainerStarted","Data":"b68fb0e59eca7596a6e529e14c6a548a4f4302d23606fe9c9caab8a47d3ea7b0"} Jan 29 15:14:22 crc kubenswrapper[4757]: I0129 15:14:22.176491 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-99bd47d99-ts4xv" Jan 29 15:14:22 crc kubenswrapper[4757]: I0129 15:14:22.178095 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae","Type":"ContainerStarted","Data":"1dd625dcb5615e55e288e16c66873b143a3712038603e318892fcf8d4de00870"} Jan 29 15:14:22 crc kubenswrapper[4757]: I0129 15:14:22.180713 4757 generic.go:334] "Generic (PLEG): container finished" podID="799dd429-f9fe-4936-a5aa-62ab56ea855d" containerID="48fa8505dcb945ab5bd73662bb53f6cfdf5f111f525ece3bef0579077f73f775" exitCode=0 Jan 29 15:14:22 crc kubenswrapper[4757]: I0129 15:14:22.180769 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"799dd429-f9fe-4936-a5aa-62ab56ea855d","Type":"ContainerDied","Data":"48fa8505dcb945ab5bd73662bb53f6cfdf5f111f525ece3bef0579077f73f775"} Jan 29 15:14:22 crc kubenswrapper[4757]: I0129 15:14:22.180803 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"799dd429-f9fe-4936-a5aa-62ab56ea855d","Type":"ContainerStarted","Data":"a2828a955bd4c15e51ba8627cbf9d81bd4d4fad6a24d6ed5940f1f6e63991074"} Jan 29 15:14:22 crc kubenswrapper[4757]: I0129 15:14:22.185245 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-99bd47d99-ts4xv" Jan 29 15:14:22 crc kubenswrapper[4757]: I0129 15:14:22.221999 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-868865878f-6fk46" podStartSLOduration=45.221977140999996 podStartE2EDuration="45.221977141s" podCreationTimestamp="2026-01-29 15:13:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:14:22.20207832 +0000 UTC m=+225.491328577" watchObservedRunningTime="2026-01-29 15:14:22.221977141 +0000 UTC m=+225.511227378" Jan 29 15:14:22 crc kubenswrapper[4757]: I0129 15:14:22.222743 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-99bd47d99-ts4xv" podStartSLOduration=45.222735214 podStartE2EDuration="45.222735214s" podCreationTimestamp="2026-01-29 15:13:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:14:22.217603339 +0000 UTC m=+225.506853576" watchObservedRunningTime="2026-01-29 15:14:22.222735214 +0000 UTC m=+225.511985461" Jan 29 15:14:22 crc kubenswrapper[4757]: I0129 15:14:22.271610 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=24.27159055 podStartE2EDuration="24.27159055s" podCreationTimestamp="2026-01-29 15:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:14:22.270483726 +0000 UTC m=+225.559733983" watchObservedRunningTime="2026-01-29 15:14:22.27159055 +0000 UTC m=+225.560840787" Jan 29 15:14:22 crc kubenswrapper[4757]: I0129 15:14:22.354165 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-868865878f-6fk46" Jan 29 15:14:23 crc kubenswrapper[4757]: I0129 15:14:23.460098 
4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:14:23 crc kubenswrapper[4757]: I0129 15:14:23.533522 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/799dd429-f9fe-4936-a5aa-62ab56ea855d-kubelet-dir\") pod \"799dd429-f9fe-4936-a5aa-62ab56ea855d\" (UID: \"799dd429-f9fe-4936-a5aa-62ab56ea855d\") " Jan 29 15:14:23 crc kubenswrapper[4757]: I0129 15:14:23.533621 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/799dd429-f9fe-4936-a5aa-62ab56ea855d-kube-api-access\") pod \"799dd429-f9fe-4936-a5aa-62ab56ea855d\" (UID: \"799dd429-f9fe-4936-a5aa-62ab56ea855d\") " Jan 29 15:14:23 crc kubenswrapper[4757]: I0129 15:14:23.534735 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/799dd429-f9fe-4936-a5aa-62ab56ea855d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "799dd429-f9fe-4936-a5aa-62ab56ea855d" (UID: "799dd429-f9fe-4936-a5aa-62ab56ea855d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:14:23 crc kubenswrapper[4757]: I0129 15:14:23.539224 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/799dd429-f9fe-4936-a5aa-62ab56ea855d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "799dd429-f9fe-4936-a5aa-62ab56ea855d" (UID: "799dd429-f9fe-4936-a5aa-62ab56ea855d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:14:23 crc kubenswrapper[4757]: I0129 15:14:23.635343 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/799dd429-f9fe-4936-a5aa-62ab56ea855d-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:14:23 crc kubenswrapper[4757]: I0129 15:14:23.635657 4757 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/799dd429-f9fe-4936-a5aa-62ab56ea855d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:14:24 crc kubenswrapper[4757]: I0129 15:14:24.192178 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:14:24 crc kubenswrapper[4757]: I0129 15:14:24.196393 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"799dd429-f9fe-4936-a5aa-62ab56ea855d","Type":"ContainerDied","Data":"a2828a955bd4c15e51ba8627cbf9d81bd4d4fad6a24d6ed5940f1f6e63991074"} Jan 29 15:14:24 crc kubenswrapper[4757]: I0129 15:14:24.196461 4757 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2828a955bd4c15e51ba8627cbf9d81bd4d4fad6a24d6ed5940f1f6e63991074" Jan 29 15:14:24 crc kubenswrapper[4757]: E0129 15:14:24.531142 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:14:24 crc kubenswrapper[4757]: E0129 15:14:24.531334 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rr9fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-99p4m_openshift-marketplace(6f40510d-f93a-4a84-ad4a-e503fa0bdf09): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:14:24 crc kubenswrapper[4757]: E0129 15:14:24.532528 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-99p4m" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" Jan 29 15:14:28 crc kubenswrapper[4757]: E0129 15:14:28.522159 4757 log.go:32] "PullImage from image service failed" err="rpc error: code 
= Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:14:28 crc kubenswrapper[4757]: E0129 15:14:28.523548 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-swndd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-v8v75_openshift-marketplace(bce413ab-1d96-4e66-b700-db27f6b52966): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:14:28 crc kubenswrapper[4757]: E0129 15:14:28.524768 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-v8v75" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" Jan 29 15:14:30 crc kubenswrapper[4757]: E0129 15:14:30.524126 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:14:30 crc kubenswrapper[4757]: E0129 15:14:30.524716 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bg5b9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-57qth_openshift-marketplace(d4596539-1be7-44ac-8e25-3fd37c823166): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:14:30 crc kubenswrapper[4757]: E0129 15:14:30.526003 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-57qth" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" Jan 29 15:14:31 crc kubenswrapper[4757]: E0129 15:14:31.516973 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:14:31 crc kubenswrapper[4757]: E0129 15:14:31.517417 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tvvcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-btp4k_openshift-marketplace(f2342b27-9060-4697-a957-65d07f099e82): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:14:31 crc kubenswrapper[4757]: E0129 15:14:31.518761 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-btp4k" podUID="f2342b27-9060-4697-a957-65d07f099e82" Jan 29 15:14:31 crc kubenswrapper[4757]: E0129 15:14:31.522420 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:14:31 crc kubenswrapper[4757]: E0129 15:14:31.522589 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgwj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-jhlrf_openshift-marketplace(92724a14-21db-441f-b509-142dc0a8dc15): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:14:31 crc kubenswrapper[4757]: E0129 15:14:31.523731 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-jhlrf" podUID="92724a14-21db-441f-b509-142dc0a8dc15" Jan 29 15:14:34 crc kubenswrapper[4757]: E0129 15:14:34.531600 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:14:34 crc kubenswrapper[4757]: E0129 15:14:34.532016 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjf2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-pxw6w_openshift-marketplace(fd7070d7-3870-49f1-8976-094ad97b6efc): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:14:34 crc kubenswrapper[4757]: E0129 15:14:34.533495 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-pxw6w" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" Jan 29 15:14:34 crc kubenswrapper[4757]: E0129 15:14:34.555443 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:14:34 crc kubenswrapper[4757]: E0129 15:14:34.555732 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d8p6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-c5pw7_openshift-marketplace(4e10b6b9-259a-417c-ba5d-311e75543637): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:14:34 crc kubenswrapper[4757]: E0129 15:14:34.557667 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-c5pw7" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" Jan 29 15:14:35 crc kubenswrapper[4757]: E0129 15:14:35.544595 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:14:35 crc kubenswrapper[4757]: E0129 15:14:35.544730 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rm2n7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-2jc8z_openshift-marketplace(43de85f7-11df-4e6f-8d3f-b982b03ce802): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:14:35 crc kubenswrapper[4757]: E0129 15:14:35.546080 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-2jc8z" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" Jan 29 15:14:36 crc kubenswrapper[4757]: E0129 15:14:36.397060 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-99p4m" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" Jan 29 15:14:41 crc kubenswrapper[4757]: E0129 15:14:41.399393 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-57qth" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" Jan 29 15:14:42 crc kubenswrapper[4757]: E0129 15:14:42.397840 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jhlrf" podUID="92724a14-21db-441f-b509-142dc0a8dc15" Jan 29 15:14:43 crc kubenswrapper[4757]: E0129 15:14:43.397488 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-v8v75" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" Jan 29 15:14:44 crc kubenswrapper[4757]: E0129 15:14:44.397397 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-btp4k" podUID="f2342b27-9060-4697-a957-65d07f099e82" Jan 29 15:14:48 crc kubenswrapper[4757]: E0129 15:14:48.404489 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c5pw7" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" Jan 29 15:14:48 crc kubenswrapper[4757]: E0129 15:14:48.405751 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pxw6w" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" Jan 29 15:14:48 crc kubenswrapper[4757]: I0129 15:14:48.753241 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mg555"] Jan 29 15:14:49 crc kubenswrapper[4757]: E0129 15:14:49.399027 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2jc8z" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" Jan 29 15:14:50 crc kubenswrapper[4757]: E0129 15:14:50.519048 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:14:50 crc kubenswrapper[4757]: E0129 15:14:50.519600 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rr9fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-99p4m_openshift-marketplace(6f40510d-f93a-4a84-ad4a-e503fa0bdf09): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:14:50 crc kubenswrapper[4757]: E0129 15:14:50.521222 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-99p4m" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" Jan 29 15:14:55 crc kubenswrapper[4757]: E0129 15:14:55.519215 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:14:55 crc kubenswrapper[4757]: E0129 15:14:55.519783 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bg5b9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-57qth_openshift-marketplace(d4596539-1be7-44ac-8e25-3fd37c823166): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:14:55 crc kubenswrapper[4757]: E0129 15:14:55.520987 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-57qth" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" Jan 29 15:14:55 crc kubenswrapper[4757]: E0129 15:14:55.539744 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:14:55 crc kubenswrapper[4757]: E0129 15:14:55.539901 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-swndd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-v8v75_openshift-marketplace(bce413ab-1d96-4e66-b700-db27f6b52966): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:14:55 crc kubenswrapper[4757]: E0129 15:14:55.541557 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-v8v75" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" Jan 29 15:14:56 crc kubenswrapper[4757]: E0129 15:14:56.517838 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:14:56 crc kubenswrapper[4757]: E0129 15:14:56.517968 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgwj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-jhlrf_openshift-marketplace(92724a14-21db-441f-b509-142dc0a8dc15): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:14:56 crc kubenswrapper[4757]: E0129 15:14:56.519546 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-jhlrf" podUID="92724a14-21db-441f-b509-142dc0a8dc15" Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.230109 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-99bd47d99-ts4xv"] Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.230347 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-99bd47d99-ts4xv" podUID="c76747ba-3b36-43df-9263-c56a783ce82f" containerName="controller-manager" containerID="cri-o://1020fb9050c71d661b955823624f5cfd582b6d2784873e1bc39087e27c91ff61" gracePeriod=30 Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.329097 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-868865878f-6fk46"] Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.329521 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-868865878f-6fk46" podUID="69d27087-23a4-497a-8b0e-30397961c886" containerName="route-controller-manager" containerID="cri-o://3f14b0979e996f66a08e35b6e288e64d0b200664c2762cb8b126f01b7a4c641e" gracePeriod=30 Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.354090 4757 generic.go:334] "Generic (PLEG): container finished" podID="c76747ba-3b36-43df-9263-c56a783ce82f" containerID="1020fb9050c71d661b955823624f5cfd582b6d2784873e1bc39087e27c91ff61" exitCode=0 Jan 29 15:14:57 crc 
kubenswrapper[4757]: I0129 15:14:57.354146 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-99bd47d99-ts4xv" event={"ID":"c76747ba-3b36-43df-9263-c56a783ce82f","Type":"ContainerDied","Data":"1020fb9050c71d661b955823624f5cfd582b6d2784873e1bc39087e27c91ff61"} Jan 29 15:14:57 crc kubenswrapper[4757]: E0129 15:14:57.518622 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:14:57 crc kubenswrapper[4757]: E0129 15:14:57.518740 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tvvcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-btp4k_openshift-marketplace(f2342b27-9060-4697-a957-65d07f099e82): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:14:57 crc kubenswrapper[4757]: E0129 15:14:57.520171 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-btp4k" podUID="f2342b27-9060-4697-a957-65d07f099e82" Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.665402 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-99bd47d99-ts4xv" Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.734615 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-868865878f-6fk46" Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.758169 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c76747ba-3b36-43df-9263-c56a783ce82f-proxy-ca-bundles\") pod \"c76747ba-3b36-43df-9263-c56a783ce82f\" (UID: \"c76747ba-3b36-43df-9263-c56a783ce82f\") " Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.758244 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c76747ba-3b36-43df-9263-c56a783ce82f-config\") pod \"c76747ba-3b36-43df-9263-c56a783ce82f\" (UID: \"c76747ba-3b36-43df-9263-c56a783ce82f\") " Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.758300 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69d27087-23a4-497a-8b0e-30397961c886-serving-cert\") pod \"69d27087-23a4-497a-8b0e-30397961c886\" (UID: \"69d27087-23a4-497a-8b0e-30397961c886\") " Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.758325 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnfn7\" (UniqueName: \"kubernetes.io/projected/c76747ba-3b36-43df-9263-c56a783ce82f-kube-api-access-gnfn7\") pod \"c76747ba-3b36-43df-9263-c56a783ce82f\" (UID: \"c76747ba-3b36-43df-9263-c56a783ce82f\") " Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.758346 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69d27087-23a4-497a-8b0e-30397961c886-config\") pod \"69d27087-23a4-497a-8b0e-30397961c886\" (UID: \"69d27087-23a4-497a-8b0e-30397961c886\") " Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.758365 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c76747ba-3b36-43df-9263-c56a783ce82f-client-ca\") pod \"c76747ba-3b36-43df-9263-c56a783ce82f\" (UID: \"c76747ba-3b36-43df-9263-c56a783ce82f\") " Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.758387 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd8wb\" (UniqueName: \"kubernetes.io/projected/69d27087-23a4-497a-8b0e-30397961c886-kube-api-access-vd8wb\") pod \"69d27087-23a4-497a-8b0e-30397961c886\" (UID: \"69d27087-23a4-497a-8b0e-30397961c886\") " Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.758409 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69d27087-23a4-497a-8b0e-30397961c886-client-ca\") pod \"69d27087-23a4-497a-8b0e-30397961c886\" (UID: \"69d27087-23a4-497a-8b0e-30397961c886\") " Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.758425 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c76747ba-3b36-43df-9263-c56a783ce82f-serving-cert\") pod \"c76747ba-3b36-43df-9263-c56a783ce82f\" (UID: \"c76747ba-3b36-43df-9263-c56a783ce82f\") " Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.760327 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c76747ba-3b36-43df-9263-c56a783ce82f-config" (OuterVolumeSpecName: "config") pod "c76747ba-3b36-43df-9263-c56a783ce82f" (UID: 
"c76747ba-3b36-43df-9263-c56a783ce82f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.760614 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c76747ba-3b36-43df-9263-c56a783ce82f-client-ca" (OuterVolumeSpecName: "client-ca") pod "c76747ba-3b36-43df-9263-c56a783ce82f" (UID: "c76747ba-3b36-43df-9263-c56a783ce82f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.760667 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c76747ba-3b36-43df-9263-c56a783ce82f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c76747ba-3b36-43df-9263-c56a783ce82f" (UID: "c76747ba-3b36-43df-9263-c56a783ce82f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.760984 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69d27087-23a4-497a-8b0e-30397961c886-client-ca" (OuterVolumeSpecName: "client-ca") pod "69d27087-23a4-497a-8b0e-30397961c886" (UID: "69d27087-23a4-497a-8b0e-30397961c886"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.761151 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69d27087-23a4-497a-8b0e-30397961c886-config" (OuterVolumeSpecName: "config") pod "69d27087-23a4-497a-8b0e-30397961c886" (UID: "69d27087-23a4-497a-8b0e-30397961c886"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.763704 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69d27087-23a4-497a-8b0e-30397961c886-kube-api-access-vd8wb" (OuterVolumeSpecName: "kube-api-access-vd8wb") pod "69d27087-23a4-497a-8b0e-30397961c886" (UID: "69d27087-23a4-497a-8b0e-30397961c886"). InnerVolumeSpecName "kube-api-access-vd8wb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.763942 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c76747ba-3b36-43df-9263-c56a783ce82f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c76747ba-3b36-43df-9263-c56a783ce82f" (UID: "c76747ba-3b36-43df-9263-c56a783ce82f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.764347 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69d27087-23a4-497a-8b0e-30397961c886-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "69d27087-23a4-497a-8b0e-30397961c886" (UID: "69d27087-23a4-497a-8b0e-30397961c886"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.764725 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c76747ba-3b36-43df-9263-c56a783ce82f-kube-api-access-gnfn7" (OuterVolumeSpecName: "kube-api-access-gnfn7") pod "c76747ba-3b36-43df-9263-c56a783ce82f" (UID: "c76747ba-3b36-43df-9263-c56a783ce82f"). 
InnerVolumeSpecName "kube-api-access-gnfn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.859141 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c76747ba-3b36-43df-9263-c56a783ce82f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.859177 4757 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c76747ba-3b36-43df-9263-c56a783ce82f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.859197 4757 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c76747ba-3b36-43df-9263-c56a783ce82f-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.859208 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69d27087-23a4-497a-8b0e-30397961c886-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.859221 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnfn7\" (UniqueName: \"kubernetes.io/projected/c76747ba-3b36-43df-9263-c56a783ce82f-kube-api-access-gnfn7\") on node \"crc\" DevicePath \"\"" Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.859233 4757 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69d27087-23a4-497a-8b0e-30397961c886-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.859244 4757 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c76747ba-3b36-43df-9263-c56a783ce82f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.859256 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd8wb\" (UniqueName: \"kubernetes.io/projected/69d27087-23a4-497a-8b0e-30397961c886-kube-api-access-vd8wb\") on node \"crc\" DevicePath \"\"" Jan 29 15:14:57 crc kubenswrapper[4757]: I0129 15:14:57.859281 4757 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69d27087-23a4-497a-8b0e-30397961c886-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.328430 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5"] Jan 29 15:14:58 crc kubenswrapper[4757]: E0129 15:14:58.328637 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c76747ba-3b36-43df-9263-c56a783ce82f" containerName="controller-manager" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.328655 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="c76747ba-3b36-43df-9263-c56a783ce82f" containerName="controller-manager" Jan 29 15:14:58 crc kubenswrapper[4757]: E0129 15:14:58.328675 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69d27087-23a4-497a-8b0e-30397961c886" containerName="route-controller-manager" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.328682 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="69d27087-23a4-497a-8b0e-30397961c886" containerName="route-controller-manager" Jan 29 15:14:58 crc kubenswrapper[4757]: E0129 15:14:58.328694 4757 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="799dd429-f9fe-4936-a5aa-62ab56ea855d" containerName="pruner" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.328701 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="799dd429-f9fe-4936-a5aa-62ab56ea855d" containerName="pruner" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.328838 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="799dd429-f9fe-4936-a5aa-62ab56ea855d" containerName="pruner" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.328857 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="69d27087-23a4-497a-8b0e-30397961c886" containerName="route-controller-manager" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.328867 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="c76747ba-3b36-43df-9263-c56a783ce82f" containerName="controller-manager" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.329300 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.349641 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5"] Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.362525 4757 generic.go:334] "Generic (PLEG): container finished" podID="69d27087-23a4-497a-8b0e-30397961c886" containerID="3f14b0979e996f66a08e35b6e288e64d0b200664c2762cb8b126f01b7a4c641e" exitCode=0 Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.362621 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-868865878f-6fk46" event={"ID":"69d27087-23a4-497a-8b0e-30397961c886","Type":"ContainerDied","Data":"3f14b0979e996f66a08e35b6e288e64d0b200664c2762cb8b126f01b7a4c641e"} Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.362660 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-868865878f-6fk46" event={"ID":"69d27087-23a4-497a-8b0e-30397961c886","Type":"ContainerDied","Data":"7aa0fb1c6ddae56a3166f86e59f239760460b2456b7f65bf1535d48bc42d506f"} Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.362680 4757 scope.go:117] "RemoveContainer" containerID="3f14b0979e996f66a08e35b6e288e64d0b200664c2762cb8b126f01b7a4c641e" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.363061 4757 util.go:48] "No ready sandbox for pod can be found. 
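The "Generic (PLEG)" lines show the pod lifecycle event generator catching up with the runtime: it periodically relists container states, diffs them against its previous snapshot, and feeds ContainerDied events (with the recorded exitCode) into the sync loop, which then removes the dead containers. A toy relist diff, assuming simplified state types rather than the real PLEG cache:

package main

import "fmt"

type state string

const (
	running state = "running"
	exited  state = "exited"
)

// diff emits a ContainerDied event for every container that was running in
// the previous snapshot and has exited in the current one.
func diff(prev, cur map[string]state) []string {
	var events []string
	for id, s := range cur {
		if prev[id] == running && s == exited {
			events = append(events, "ContainerDied "+id)
		}
	}
	return events
}

func main() {
	prev := map[string]state{"3f14b0979e99": running}
	cur := map[string]state{"3f14b0979e99": exited}
	for _, e := range diff(prev, cur) {
		fmt.Println("SyncLoop (PLEG): event", e)
	}
}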
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-868865878f-6fk46" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.364840 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxjs6\" (UniqueName: \"kubernetes.io/projected/7fea9b3d-4277-4a7f-92e6-23c5431051e4-kube-api-access-wxjs6\") pod \"controller-manager-85bc4bdcd-5zkz5\" (UID: \"7fea9b3d-4277-4a7f-92e6-23c5431051e4\") " pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.364903 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fea9b3d-4277-4a7f-92e6-23c5431051e4-proxy-ca-bundles\") pod \"controller-manager-85bc4bdcd-5zkz5\" (UID: \"7fea9b3d-4277-4a7f-92e6-23c5431051e4\") " pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.364953 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fea9b3d-4277-4a7f-92e6-23c5431051e4-client-ca\") pod \"controller-manager-85bc4bdcd-5zkz5\" (UID: \"7fea9b3d-4277-4a7f-92e6-23c5431051e4\") " pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.365012 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fea9b3d-4277-4a7f-92e6-23c5431051e4-config\") pod \"controller-manager-85bc4bdcd-5zkz5\" (UID: \"7fea9b3d-4277-4a7f-92e6-23c5431051e4\") " pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.365095 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fea9b3d-4277-4a7f-92e6-23c5431051e4-serving-cert\") pod \"controller-manager-85bc4bdcd-5zkz5\" (UID: \"7fea9b3d-4277-4a7f-92e6-23c5431051e4\") " pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.365419 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-99bd47d99-ts4xv" event={"ID":"c76747ba-3b36-43df-9263-c56a783ce82f","Type":"ContainerDied","Data":"b68fb0e59eca7596a6e529e14c6a548a4f4302d23606fe9c9caab8a47d3ea7b0"} Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.365513 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-99bd47d99-ts4xv" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.382002 4757 scope.go:117] "RemoveContainer" containerID="3f14b0979e996f66a08e35b6e288e64d0b200664c2762cb8b126f01b7a4c641e" Jan 29 15:14:58 crc kubenswrapper[4757]: E0129 15:14:58.382525 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f14b0979e996f66a08e35b6e288e64d0b200664c2762cb8b126f01b7a4c641e\": container with ID starting with 3f14b0979e996f66a08e35b6e288e64d0b200664c2762cb8b126f01b7a4c641e not found: ID does not exist" containerID="3f14b0979e996f66a08e35b6e288e64d0b200664c2762cb8b126f01b7a4c641e" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.382620 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f14b0979e996f66a08e35b6e288e64d0b200664c2762cb8b126f01b7a4c641e"} err="failed to get container status \"3f14b0979e996f66a08e35b6e288e64d0b200664c2762cb8b126f01b7a4c641e\": rpc error: code = NotFound desc = could not find container \"3f14b0979e996f66a08e35b6e288e64d0b200664c2762cb8b126f01b7a4c641e\": container with ID starting with 3f14b0979e996f66a08e35b6e288e64d0b200664c2762cb8b126f01b7a4c641e not found: ID does not exist" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.382667 4757 scope.go:117] "RemoveContainer" containerID="1020fb9050c71d661b955823624f5cfd582b6d2784873e1bc39087e27c91ff61" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.412299 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-868865878f-6fk46"] Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.422003 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-868865878f-6fk46"] Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.426928 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-99bd47d99-ts4xv"] Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.430952 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-99bd47d99-ts4xv"] Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.465636 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxjs6\" (UniqueName: \"kubernetes.io/projected/7fea9b3d-4277-4a7f-92e6-23c5431051e4-kube-api-access-wxjs6\") pod \"controller-manager-85bc4bdcd-5zkz5\" (UID: \"7fea9b3d-4277-4a7f-92e6-23c5431051e4\") " pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.465687 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fea9b3d-4277-4a7f-92e6-23c5431051e4-proxy-ca-bundles\") pod \"controller-manager-85bc4bdcd-5zkz5\" (UID: \"7fea9b3d-4277-4a7f-92e6-23c5431051e4\") " pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.465712 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fea9b3d-4277-4a7f-92e6-23c5431051e4-client-ca\") pod \"controller-manager-85bc4bdcd-5zkz5\" (UID: \"7fea9b3d-4277-4a7f-92e6-23c5431051e4\") " 
pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.465741 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fea9b3d-4277-4a7f-92e6-23c5431051e4-config\") pod \"controller-manager-85bc4bdcd-5zkz5\" (UID: \"7fea9b3d-4277-4a7f-92e6-23c5431051e4\") " pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.465830 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fea9b3d-4277-4a7f-92e6-23c5431051e4-serving-cert\") pod \"controller-manager-85bc4bdcd-5zkz5\" (UID: \"7fea9b3d-4277-4a7f-92e6-23c5431051e4\") " pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.467633 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fea9b3d-4277-4a7f-92e6-23c5431051e4-client-ca\") pod \"controller-manager-85bc4bdcd-5zkz5\" (UID: \"7fea9b3d-4277-4a7f-92e6-23c5431051e4\") " pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.468121 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fea9b3d-4277-4a7f-92e6-23c5431051e4-proxy-ca-bundles\") pod \"controller-manager-85bc4bdcd-5zkz5\" (UID: \"7fea9b3d-4277-4a7f-92e6-23c5431051e4\") " pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.468829 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fea9b3d-4277-4a7f-92e6-23c5431051e4-config\") pod \"controller-manager-85bc4bdcd-5zkz5\" (UID: \"7fea9b3d-4277-4a7f-92e6-23c5431051e4\") " pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.473572 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fea9b3d-4277-4a7f-92e6-23c5431051e4-serving-cert\") pod \"controller-manager-85bc4bdcd-5zkz5\" (UID: \"7fea9b3d-4277-4a7f-92e6-23c5431051e4\") " pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.490644 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxjs6\" (UniqueName: \"kubernetes.io/projected/7fea9b3d-4277-4a7f-92e6-23c5431051e4-kube-api-access-wxjs6\") pod \"controller-manager-85bc4bdcd-5zkz5\" (UID: \"7fea9b3d-4277-4a7f-92e6-23c5431051e4\") " pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:14:58 crc kubenswrapper[4757]: I0129 15:14:58.687836 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.000048 4757 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.000756 4757 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.000921 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.001042 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978" gracePeriod=15 Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.001069 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674" gracePeriod=15 Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.001145 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://3412536b3e56824e5591481586ad2a7b78e3cab13dcb63aad36e544ec281b3b9" gracePeriod=15 Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.001147 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf" gracePeriod=15 Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.001152 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce" gracePeriod=15 Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.001822 4757 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 15:14:59 crc kubenswrapper[4757]: E0129 15:14:59.002075 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.002090 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 15:14:59 crc kubenswrapper[4757]: E0129 15:14:59.002103 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.002111 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 29 15:14:59 crc kubenswrapper[4757]: E0129 15:14:59.002120 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.002128 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 15:14:59 crc kubenswrapper[4757]: E0129 15:14:59.002138 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.002149 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 15:14:59 crc kubenswrapper[4757]: E0129 15:14:59.002159 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.002166 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:14:59 crc kubenswrapper[4757]: E0129 15:14:59.002183 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.002191 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 15:14:59 crc kubenswrapper[4757]: E0129 15:14:59.002207 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.002215 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.002431 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.002447 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.002461 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.002473 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.002483 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.002491 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 15:14:59 crc kubenswrapper[4757]: E0129 15:14:59.002610 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 
15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.002620 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.002745 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:14:59 crc kubenswrapper[4757]: E0129 15:14:59.050962 4757 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.219:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.176181 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.176630 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.176671 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.176705 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.177088 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.177192 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.177220 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.177233 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.277948 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.278004 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.278028 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.278059 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.278109 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.278108 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.278121 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.278139 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 
15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.278179 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.278178 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.278840 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.278870 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.278913 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.279002 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.279383 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.279415 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:14:59 crc kubenswrapper[4757]: E0129 15:14:59.289993 4757 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 29 15:14:59 crc kubenswrapper[4757]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-85bc4bdcd-5zkz5_openshift-controller-manager_7fea9b3d-4277-4a7f-92e6-23c5431051e4_0(c2ffcd1ca968036183af63d383217dec7c8194ea95fb713f5c21a144839c9823): error adding pod 
openshift-controller-manager_controller-manager-85bc4bdcd-5zkz5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c2ffcd1ca968036183af63d383217dec7c8194ea95fb713f5c21a144839c9823" Netns:"/var/run/netns/4dd49520-bec3-4c7b-9050-01d2bdd53ec6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-85bc4bdcd-5zkz5;K8S_POD_INFRA_CONTAINER_ID=c2ffcd1ca968036183af63d383217dec7c8194ea95fb713f5c21a144839c9823;K8S_POD_UID=7fea9b3d-4277-4a7f-92e6-23c5431051e4" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5] networking: Multus: [openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5/7fea9b3d-4277-4a7f-92e6-23c5431051e4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-85bc4bdcd-5zkz5 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-85bc4bdcd-5zkz5 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85bc4bdcd-5zkz5?timeout=1m0s": dial tcp 38.102.83.219:6443: connect: connection refused Jan 29 15:14:59 crc kubenswrapper[4757]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 15:14:59 crc kubenswrapper[4757]: > Jan 29 15:14:59 crc kubenswrapper[4757]: E0129 15:14:59.290122 4757 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 29 15:14:59 crc kubenswrapper[4757]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-85bc4bdcd-5zkz5_openshift-controller-manager_7fea9b3d-4277-4a7f-92e6-23c5431051e4_0(c2ffcd1ca968036183af63d383217dec7c8194ea95fb713f5c21a144839c9823): error adding pod openshift-controller-manager_controller-manager-85bc4bdcd-5zkz5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c2ffcd1ca968036183af63d383217dec7c8194ea95fb713f5c21a144839c9823" Netns:"/var/run/netns/4dd49520-bec3-4c7b-9050-01d2bdd53ec6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-85bc4bdcd-5zkz5;K8S_POD_INFRA_CONTAINER_ID=c2ffcd1ca968036183af63d383217dec7c8194ea95fb713f5c21a144839c9823;K8S_POD_UID=7fea9b3d-4277-4a7f-92e6-23c5431051e4" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5] networking: Multus: [openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5/7fea9b3d-4277-4a7f-92e6-23c5431051e4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-85bc4bdcd-5zkz5 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-85bc4bdcd-5zkz5 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85bc4bdcd-5zkz5?timeout=1m0s": dial tcp 
38.102.83.219:6443: connect: connection refused Jan 29 15:14:59 crc kubenswrapper[4757]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 15:14:59 crc kubenswrapper[4757]: > pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:14:59 crc kubenswrapper[4757]: E0129 15:14:59.290150 4757 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 29 15:14:59 crc kubenswrapper[4757]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-85bc4bdcd-5zkz5_openshift-controller-manager_7fea9b3d-4277-4a7f-92e6-23c5431051e4_0(c2ffcd1ca968036183af63d383217dec7c8194ea95fb713f5c21a144839c9823): error adding pod openshift-controller-manager_controller-manager-85bc4bdcd-5zkz5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c2ffcd1ca968036183af63d383217dec7c8194ea95fb713f5c21a144839c9823" Netns:"/var/run/netns/4dd49520-bec3-4c7b-9050-01d2bdd53ec6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-85bc4bdcd-5zkz5;K8S_POD_INFRA_CONTAINER_ID=c2ffcd1ca968036183af63d383217dec7c8194ea95fb713f5c21a144839c9823;K8S_POD_UID=7fea9b3d-4277-4a7f-92e6-23c5431051e4" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5] networking: Multus: [openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5/7fea9b3d-4277-4a7f-92e6-23c5431051e4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-85bc4bdcd-5zkz5 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-85bc4bdcd-5zkz5 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85bc4bdcd-5zkz5?timeout=1m0s": dial tcp 38.102.83.219:6443: connect: connection refused Jan 29 15:14:59 crc kubenswrapper[4757]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 15:14:59 crc kubenswrapper[4757]: > pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:14:59 crc kubenswrapper[4757]: E0129 15:14:59.290280 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-85bc4bdcd-5zkz5_openshift-controller-manager(7fea9b3d-4277-4a7f-92e6-23c5431051e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-85bc4bdcd-5zkz5_openshift-controller-manager(7fea9b3d-4277-4a7f-92e6-23c5431051e4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_controller-manager-85bc4bdcd-5zkz5_openshift-controller-manager_7fea9b3d-4277-4a7f-92e6-23c5431051e4_0(c2ffcd1ca968036183af63d383217dec7c8194ea95fb713f5c21a144839c9823): error adding pod openshift-controller-manager_controller-manager-85bc4bdcd-5zkz5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"c2ffcd1ca968036183af63d383217dec7c8194ea95fb713f5c21a144839c9823\\\" Netns:\\\"/var/run/netns/4dd49520-bec3-4c7b-9050-01d2bdd53ec6\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-85bc4bdcd-5zkz5;K8S_POD_INFRA_CONTAINER_ID=c2ffcd1ca968036183af63d383217dec7c8194ea95fb713f5c21a144839c9823;K8S_POD_UID=7fea9b3d-4277-4a7f-92e6-23c5431051e4\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5] networking: Multus: [openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5/7fea9b3d-4277-4a7f-92e6-23c5431051e4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-85bc4bdcd-5zkz5 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-85bc4bdcd-5zkz5 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85bc4bdcd-5zkz5?timeout=1m0s\\\": dial tcp 38.102.83.219:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" podUID="7fea9b3d-4277-4a7f-92e6-23c5431051e4" Jan 29 15:14:59 crc kubenswrapper[4757]: E0129 15:14:59.290833 4757 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/events\": dial tcp 38.102.83.219:6443: connect: connection refused" event=< Jan 29 15:14:59 crc kubenswrapper[4757]: &Event{ObjectMeta:{controller-manager-85bc4bdcd-5zkz5.188f3c800f4ee78f openshift-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager,Name:controller-manager-85bc4bdcd-5zkz5,UID:7fea9b3d-4277-4a7f-92e6-23c5431051e4,APIVersion:v1,ResourceVersion:29822,FieldPath:,},Reason:FailedCreatePodSandBox,Message:Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-85bc4bdcd-5zkz5_openshift-controller-manager_7fea9b3d-4277-4a7f-92e6-23c5431051e4_0(c2ffcd1ca968036183af63d383217dec7c8194ea95fb713f5c21a144839c9823): error adding pod openshift-controller-manager_controller-manager-85bc4bdcd-5zkz5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c2ffcd1ca968036183af63d383217dec7c8194ea95fb713f5c21a144839c9823" 
Netns:"/var/run/netns/4dd49520-bec3-4c7b-9050-01d2bdd53ec6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-85bc4bdcd-5zkz5;K8S_POD_INFRA_CONTAINER_ID=c2ffcd1ca968036183af63d383217dec7c8194ea95fb713f5c21a144839c9823;K8S_POD_UID=7fea9b3d-4277-4a7f-92e6-23c5431051e4" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5] networking: Multus: [openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5/7fea9b3d-4277-4a7f-92e6-23c5431051e4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-85bc4bdcd-5zkz5 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-85bc4bdcd-5zkz5 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85bc4bdcd-5zkz5?timeout=1m0s": dial tcp 38.102.83.219:6443: connect: connection refused Jan 29 15:14:59 crc kubenswrapper[4757]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"},Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 15:14:59.290204047 +0000 UTC m=+262.579454284,LastTimestamp:2026-01-29 15:14:59.290204047 +0000 UTC m=+262.579454284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 29 15:14:59 crc kubenswrapper[4757]: > Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.352260 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.372923 4757 generic.go:334] "Generic (PLEG): container finished" podID="c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae" containerID="1dd625dcb5615e55e288e16c66873b143a3712038603e318892fcf8d4de00870" exitCode=0 Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.373021 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae","Type":"ContainerDied","Data":"1dd625dcb5615e55e288e16c66873b143a3712038603e318892fcf8d4de00870"} Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.373616 4757 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.373811 4757 status_manager.go:851] "Failed to get status for pod" podUID="c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.375998 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.377640 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.378601 4757 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3412536b3e56824e5591481586ad2a7b78e3cab13dcb63aad36e544ec281b3b9" exitCode=0 Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.378623 4757 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674" exitCode=0 Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.378636 4757 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf" exitCode=0 Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.378646 4757 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce" exitCode=2 Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.378710 4757 scope.go:117] "RemoveContainer" containerID="1c367b5589d40a105d4c4f8a51cd5a2b3b387a3287f521e8fd965f6bba21ea08" Jan 29 15:14:59 crc kubenswrapper[4757]: W0129 15:14:59.379114 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-de0fd6c34ec7eb7a178e41e9fa895d058db32fed0a267c0075a4037a8cc0a3bd WatchSource:0}: Error finding container de0fd6c34ec7eb7a178e41e9fa895d058db32fed0a267c0075a4037a8cc0a3bd: Status 404 returned error can't find the 
container with id de0fd6c34ec7eb7a178e41e9fa895d058db32fed0a267c0075a4037a8cc0a3bd Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.382612 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.383111 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:14:59 crc kubenswrapper[4757]: E0129 15:14:59.412020 4757 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:14:59 crc kubenswrapper[4757]: E0129 15:14:59.412567 4757 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:14:59 crc kubenswrapper[4757]: E0129 15:14:59.412866 4757 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:14:59 crc kubenswrapper[4757]: E0129 15:14:59.413119 4757 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:14:59 crc kubenswrapper[4757]: E0129 15:14:59.413411 4757 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.413465 4757 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 29 15:14:59 crc kubenswrapper[4757]: E0129 15:14:59.413691 4757 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="200ms" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.417139 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69d27087-23a4-497a-8b0e-30397961c886" path="/var/lib/kubelet/pods/69d27087-23a4-497a-8b0e-30397961c886/volumes" Jan 29 15:14:59 crc kubenswrapper[4757]: I0129 15:14:59.417850 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c76747ba-3b36-43df-9263-c56a783ce82f" path="/var/lib/kubelet/pods/c76747ba-3b36-43df-9263-c56a783ce82f/volumes" Jan 29 15:14:59 crc kubenswrapper[4757]: E0129 15:14:59.620463 4757 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="400ms" Jan 29 15:14:59 crc kubenswrapper[4757]: E0129 15:14:59.922655 4757 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 29 15:14:59 crc kubenswrapper[4757]: rpc error: code = Unknown desc = failed to create 
pod network sandbox k8s_controller-manager-85bc4bdcd-5zkz5_openshift-controller-manager_7fea9b3d-4277-4a7f-92e6-23c5431051e4_0(3b97d9e8a007e2bef0fee813f6b44d8bfc58aa601dc9e64d6ed0d5a89e8336e4): error adding pod openshift-controller-manager_controller-manager-85bc4bdcd-5zkz5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3b97d9e8a007e2bef0fee813f6b44d8bfc58aa601dc9e64d6ed0d5a89e8336e4" Netns:"/var/run/netns/e96e6e53-b39e-4c27-b184-08fc777ff2c0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-85bc4bdcd-5zkz5;K8S_POD_INFRA_CONTAINER_ID=3b97d9e8a007e2bef0fee813f6b44d8bfc58aa601dc9e64d6ed0d5a89e8336e4;K8S_POD_UID=7fea9b3d-4277-4a7f-92e6-23c5431051e4" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5] networking: Multus: [openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5/7fea9b3d-4277-4a7f-92e6-23c5431051e4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-85bc4bdcd-5zkz5 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-85bc4bdcd-5zkz5 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85bc4bdcd-5zkz5?timeout=1m0s": dial tcp 38.102.83.219:6443: connect: connection refused Jan 29 15:14:59 crc kubenswrapper[4757]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 15:14:59 crc kubenswrapper[4757]: > Jan 29 15:14:59 crc kubenswrapper[4757]: E0129 15:14:59.922722 4757 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 29 15:14:59 crc kubenswrapper[4757]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-85bc4bdcd-5zkz5_openshift-controller-manager_7fea9b3d-4277-4a7f-92e6-23c5431051e4_0(3b97d9e8a007e2bef0fee813f6b44d8bfc58aa601dc9e64d6ed0d5a89e8336e4): error adding pod openshift-controller-manager_controller-manager-85bc4bdcd-5zkz5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3b97d9e8a007e2bef0fee813f6b44d8bfc58aa601dc9e64d6ed0d5a89e8336e4" Netns:"/var/run/netns/e96e6e53-b39e-4c27-b184-08fc777ff2c0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-85bc4bdcd-5zkz5;K8S_POD_INFRA_CONTAINER_ID=3b97d9e8a007e2bef0fee813f6b44d8bfc58aa601dc9e64d6ed0d5a89e8336e4;K8S_POD_UID=7fea9b3d-4277-4a7f-92e6-23c5431051e4" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5] networking: Multus: [openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5/7fea9b3d-4277-4a7f-92e6-23c5431051e4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-85bc4bdcd-5zkz5 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-85bc4bdcd-5zkz5 in out of 
cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85bc4bdcd-5zkz5?timeout=1m0s": dial tcp 38.102.83.219:6443: connect: connection refused Jan 29 15:14:59 crc kubenswrapper[4757]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 15:14:59 crc kubenswrapper[4757]: > pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:14:59 crc kubenswrapper[4757]: E0129 15:14:59.922741 4757 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 29 15:14:59 crc kubenswrapper[4757]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-85bc4bdcd-5zkz5_openshift-controller-manager_7fea9b3d-4277-4a7f-92e6-23c5431051e4_0(3b97d9e8a007e2bef0fee813f6b44d8bfc58aa601dc9e64d6ed0d5a89e8336e4): error adding pod openshift-controller-manager_controller-manager-85bc4bdcd-5zkz5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3b97d9e8a007e2bef0fee813f6b44d8bfc58aa601dc9e64d6ed0d5a89e8336e4" Netns:"/var/run/netns/e96e6e53-b39e-4c27-b184-08fc777ff2c0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-85bc4bdcd-5zkz5;K8S_POD_INFRA_CONTAINER_ID=3b97d9e8a007e2bef0fee813f6b44d8bfc58aa601dc9e64d6ed0d5a89e8336e4;K8S_POD_UID=7fea9b3d-4277-4a7f-92e6-23c5431051e4" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5] networking: Multus: [openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5/7fea9b3d-4277-4a7f-92e6-23c5431051e4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-85bc4bdcd-5zkz5 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-85bc4bdcd-5zkz5 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85bc4bdcd-5zkz5?timeout=1m0s": dial tcp 38.102.83.219:6443: connect: connection refused Jan 29 15:14:59 crc kubenswrapper[4757]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 29 15:14:59 crc kubenswrapper[4757]: > pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:14:59 crc kubenswrapper[4757]: E0129 15:14:59.922804 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-85bc4bdcd-5zkz5_openshift-controller-manager(7fea9b3d-4277-4a7f-92e6-23c5431051e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-85bc4bdcd-5zkz5_openshift-controller-manager(7fea9b3d-4277-4a7f-92e6-23c5431051e4)\\\": rpc error: code = 
Unknown desc = failed to create pod network sandbox k8s_controller-manager-85bc4bdcd-5zkz5_openshift-controller-manager_7fea9b3d-4277-4a7f-92e6-23c5431051e4_0(3b97d9e8a007e2bef0fee813f6b44d8bfc58aa601dc9e64d6ed0d5a89e8336e4): error adding pod openshift-controller-manager_controller-manager-85bc4bdcd-5zkz5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"3b97d9e8a007e2bef0fee813f6b44d8bfc58aa601dc9e64d6ed0d5a89e8336e4\\\" Netns:\\\"/var/run/netns/e96e6e53-b39e-4c27-b184-08fc777ff2c0\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-85bc4bdcd-5zkz5;K8S_POD_INFRA_CONTAINER_ID=3b97d9e8a007e2bef0fee813f6b44d8bfc58aa601dc9e64d6ed0d5a89e8336e4;K8S_POD_UID=7fea9b3d-4277-4a7f-92e6-23c5431051e4\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5] networking: Multus: [openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5/7fea9b3d-4277-4a7f-92e6-23c5431051e4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-85bc4bdcd-5zkz5 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-85bc4bdcd-5zkz5 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85bc4bdcd-5zkz5?timeout=1m0s\\\": dial tcp 38.102.83.219:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" podUID="7fea9b3d-4277-4a7f-92e6-23c5431051e4" Jan 29 15:15:00 crc kubenswrapper[4757]: E0129 15:15:00.022015 4757 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="800ms" Jan 29 15:15:00 crc kubenswrapper[4757]: I0129 15:15:00.389864 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 15:15:00 crc kubenswrapper[4757]: I0129 15:15:00.392671 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"84cea0adb2352dc8deeaa3d313d1470f144e1db6913b7e8127a63bc54a2ea988"} Jan 29 15:15:00 crc kubenswrapper[4757]: I0129 15:15:00.392707 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"de0fd6c34ec7eb7a178e41e9fa895d058db32fed0a267c0075a4037a8cc0a3bd"} Jan 29 15:15:00 crc kubenswrapper[4757]: E0129 15:15:00.393595 4757 
kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.219:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:15:00 crc kubenswrapper[4757]: I0129 15:15:00.393845 4757 status_manager.go:851] "Failed to get status for pod" podUID="c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:00 crc kubenswrapper[4757]: I0129 15:15:00.634310 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:15:00 crc kubenswrapper[4757]: I0129 15:15:00.635296 4757 status_manager.go:851] "Failed to get status for pod" podUID="c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:00 crc kubenswrapper[4757]: I0129 15:15:00.796948 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae-var-lock\") pod \"c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae\" (UID: \"c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae\") " Jan 29 15:15:00 crc kubenswrapper[4757]: I0129 15:15:00.797013 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae-kube-api-access\") pod \"c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae\" (UID: \"c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae\") " Jan 29 15:15:00 crc kubenswrapper[4757]: I0129 15:15:00.797054 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae-kubelet-dir\") pod \"c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae\" (UID: \"c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae\") " Jan 29 15:15:00 crc kubenswrapper[4757]: I0129 15:15:00.797229 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae-var-lock" (OuterVolumeSpecName: "var-lock") pod "c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae" (UID: "c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:15:00 crc kubenswrapper[4757]: I0129 15:15:00.797319 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae" (UID: "c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:15:00 crc kubenswrapper[4757]: I0129 15:15:00.804167 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae" (UID: "c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:15:00 crc kubenswrapper[4757]: E0129 15:15:00.823077 4757 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="1.6s" Jan 29 15:15:00 crc kubenswrapper[4757]: I0129 15:15:00.909018 4757 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:00 crc kubenswrapper[4757]: I0129 15:15:00.909057 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:00 crc kubenswrapper[4757]: I0129 15:15:00.909078 4757 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:01 crc kubenswrapper[4757]: E0129 15:15:01.012188 4757 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/events\": dial tcp 38.102.83.219:6443: connect: connection refused" event=< Jan 29 15:15:01 crc kubenswrapper[4757]: &Event{ObjectMeta:{controller-manager-85bc4bdcd-5zkz5.188f3c800f4ee78f openshift-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager,Name:controller-manager-85bc4bdcd-5zkz5,UID:7fea9b3d-4277-4a7f-92e6-23c5431051e4,APIVersion:v1,ResourceVersion:29822,FieldPath:,},Reason:FailedCreatePodSandBox,Message:Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-85bc4bdcd-5zkz5_openshift-controller-manager_7fea9b3d-4277-4a7f-92e6-23c5431051e4_0(c2ffcd1ca968036183af63d383217dec7c8194ea95fb713f5c21a144839c9823): error adding pod openshift-controller-manager_controller-manager-85bc4bdcd-5zkz5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c2ffcd1ca968036183af63d383217dec7c8194ea95fb713f5c21a144839c9823" Netns:"/var/run/netns/4dd49520-bec3-4c7b-9050-01d2bdd53ec6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-85bc4bdcd-5zkz5;K8S_POD_INFRA_CONTAINER_ID=c2ffcd1ca968036183af63d383217dec7c8194ea95fb713f5c21a144839c9823;K8S_POD_UID=7fea9b3d-4277-4a7f-92e6-23c5431051e4" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5] networking: Multus: [openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5/7fea9b3d-4277-4a7f-92e6-23c5431051e4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-85bc4bdcd-5zkz5 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-85bc4bdcd-5zkz5 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85bc4bdcd-5zkz5?timeout=1m0s": dial tcp 38.102.83.219:6443: connect: connection refused Jan 
29 15:15:01 crc kubenswrapper[4757]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"},Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 15:14:59.290204047 +0000 UTC m=+262.579454284,LastTimestamp:2026-01-29 15:14:59.290204047 +0000 UTC m=+262.579454284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 29 15:15:01 crc kubenswrapper[4757]: > Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.372299 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.373502 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.374459 4757 status_manager.go:851] "Failed to get status for pod" podUID="c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.374931 4757 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.401589 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.402404 4757 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978" exitCode=0 Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.402524 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.404247 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.409962 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae","Type":"ContainerDied","Data":"a2ac5578999ebfded166ff69fc1e3fe914ebdfeb39fc12cfecda3323a502d7e0"} Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.410192 4757 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2ac5578999ebfded166ff69fc1e3fe914ebdfeb39fc12cfecda3323a502d7e0" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.410059 4757 scope.go:117] "RemoveContainer" containerID="3412536b3e56824e5591481586ad2a7b78e3cab13dcb63aad36e544ec281b3b9" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.414146 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.414318 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.414342 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.414527 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.414694 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.414396 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.415086 4757 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.415111 4757 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.421631 4757 status_manager.go:851] "Failed to get status for pod" podUID="c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.426761 4757 scope.go:117] "RemoveContainer" containerID="d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.438322 4757 scope.go:117] "RemoveContainer" containerID="587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.452241 4757 scope.go:117] "RemoveContainer" containerID="c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.464547 4757 scope.go:117] "RemoveContainer" containerID="7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.478908 4757 scope.go:117] "RemoveContainer" containerID="ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.498081 4757 scope.go:117] "RemoveContainer" containerID="3412536b3e56824e5591481586ad2a7b78e3cab13dcb63aad36e544ec281b3b9" Jan 29 15:15:01 crc kubenswrapper[4757]: E0129 15:15:01.498530 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3412536b3e56824e5591481586ad2a7b78e3cab13dcb63aad36e544ec281b3b9\": container with ID starting with 3412536b3e56824e5591481586ad2a7b78e3cab13dcb63aad36e544ec281b3b9 not found: ID does not exist" containerID="3412536b3e56824e5591481586ad2a7b78e3cab13dcb63aad36e544ec281b3b9" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.498558 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3412536b3e56824e5591481586ad2a7b78e3cab13dcb63aad36e544ec281b3b9"} err="failed to get container status \"3412536b3e56824e5591481586ad2a7b78e3cab13dcb63aad36e544ec281b3b9\": rpc error: code = NotFound desc = could not find container \"3412536b3e56824e5591481586ad2a7b78e3cab13dcb63aad36e544ec281b3b9\": container with ID starting with 3412536b3e56824e5591481586ad2a7b78e3cab13dcb63aad36e544ec281b3b9 not found: ID does not exist" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.498584 4757 scope.go:117] "RemoveContainer" containerID="d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674" Jan 29 15:15:01 crc kubenswrapper[4757]: E0129 15:15:01.499008 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\": container with ID starting with 
d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674 not found: ID does not exist" containerID="d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.499042 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674"} err="failed to get container status \"d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\": rpc error: code = NotFound desc = could not find container \"d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674\": container with ID starting with d4b8d5b6df5f06a81733cc04d4946f312758328083db0c74a4b9de2483d80674 not found: ID does not exist" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.499063 4757 scope.go:117] "RemoveContainer" containerID="587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf" Jan 29 15:15:01 crc kubenswrapper[4757]: E0129 15:15:01.499454 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\": container with ID starting with 587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf not found: ID does not exist" containerID="587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.499589 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf"} err="failed to get container status \"587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\": rpc error: code = NotFound desc = could not find container \"587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf\": container with ID starting with 587f15181744b84cce3998ba4e110baf3d375263285e4499ad78826db56aabbf not found: ID does not exist" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.499673 4757 scope.go:117] "RemoveContainer" containerID="c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce" Jan 29 15:15:01 crc kubenswrapper[4757]: E0129 15:15:01.500225 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\": container with ID starting with c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce not found: ID does not exist" containerID="c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.500253 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce"} err="failed to get container status \"c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\": rpc error: code = NotFound desc = could not find container \"c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce\": container with ID starting with c350513cd0c2b00b4c0d6c4b1fff66d63520fe397331662deaab661f413c09ce not found: ID does not exist" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.500289 4757 scope.go:117] "RemoveContainer" containerID="7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978" Jan 29 15:15:01 crc kubenswrapper[4757]: E0129 15:15:01.500548 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\": container with ID starting with 7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978 not found: ID does not exist" containerID="7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.500577 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978"} err="failed to get container status \"7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\": rpc error: code = NotFound desc = could not find container \"7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978\": container with ID starting with 7e25fb5358a05272726ad1e63fb70c571f5b9b72652daada9862f8f114a8b978 not found: ID does not exist" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.500614 4757 scope.go:117] "RemoveContainer" containerID="ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9" Jan 29 15:15:01 crc kubenswrapper[4757]: E0129 15:15:01.500855 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\": container with ID starting with ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9 not found: ID does not exist" containerID="ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.500879 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9"} err="failed to get container status \"ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\": rpc error: code = NotFound desc = could not find container \"ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9\": container with ID starting with ee21d8dbb4c84742684d64010651d56a1565da454202f0745a82bd36b3a258b9 not found: ID does not exist" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.549660 4757 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.716219 4757 status_manager.go:851] "Failed to get status for pod" podUID="c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:01 crc kubenswrapper[4757]: I0129 15:15:01.716690 4757 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:02 crc kubenswrapper[4757]: I0129 15:15:02.396351 4757 status_manager.go:851] "Failed to get status for pod" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" pod="openshift-marketplace/community-operators-pxw6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-pxw6w\": dial tcp 
38.102.83.219:6443: connect: connection refused" Jan 29 15:15:02 crc kubenswrapper[4757]: I0129 15:15:02.397056 4757 status_manager.go:851] "Failed to get status for pod" podUID="c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:02 crc kubenswrapper[4757]: I0129 15:15:02.397379 4757 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:02 crc kubenswrapper[4757]: E0129 15:15:02.423642 4757 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="3.2s" Jan 29 15:15:02 crc kubenswrapper[4757]: E0129 15:15:02.522725 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:15:02 crc kubenswrapper[4757]: E0129 15:15:02.522860 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjf2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-pxw6w_openshift-marketplace(fd7070d7-3870-49f1-8976-094ad97b6efc): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:15:02 crc kubenswrapper[4757]: E0129 
15:15:02.524022 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-pxw6w" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" Jan 29 15:15:03 crc kubenswrapper[4757]: I0129 15:15:03.397450 4757 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:03 crc kubenswrapper[4757]: I0129 15:15:03.397774 4757 status_manager.go:851] "Failed to get status for pod" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" pod="openshift-marketplace/community-operators-pxw6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-pxw6w\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:03 crc kubenswrapper[4757]: I0129 15:15:03.398033 4757 status_manager.go:851] "Failed to get status for pod" podUID="c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:03 crc kubenswrapper[4757]: I0129 15:15:03.398401 4757 status_manager.go:851] "Failed to get status for pod" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" pod="openshift-marketplace/community-operators-c5pw7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c5pw7\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:03 crc kubenswrapper[4757]: I0129 15:15:03.407866 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 29 15:15:03 crc kubenswrapper[4757]: E0129 15:15:03.802183 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:15:03 crc kubenswrapper[4757]: E0129 15:15:03.802699 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d8p6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-c5pw7_openshift-marketplace(4e10b6b9-259a-417c-ba5d-311e75543637): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:15:03 crc kubenswrapper[4757]: E0129 15:15:03.803908 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-c5pw7" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" Jan 29 15:15:04 crc kubenswrapper[4757]: I0129 15:15:04.396335 4757 status_manager.go:851] "Failed to get status for pod" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" pod="openshift-marketplace/certified-operators-2jc8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2jc8z\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:04 crc kubenswrapper[4757]: I0129 15:15:04.396579 4757 status_manager.go:851] "Failed to get status for pod" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" pod="openshift-marketplace/community-operators-pxw6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-pxw6w\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:04 crc kubenswrapper[4757]: I0129 15:15:04.396785 4757 status_manager.go:851] "Failed to get status for pod" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" pod="openshift-marketplace/community-operators-c5pw7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c5pw7\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:04 crc kubenswrapper[4757]: I0129 15:15:04.397072 4757 status_manager.go:851] "Failed to get status for pod" podUID="c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:04 crc kubenswrapper[4757]: E0129 15:15:04.513137 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:15:04 crc kubenswrapper[4757]: E0129 15:15:04.513336 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rm2n7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-2jc8z_openshift-marketplace(43de85f7-11df-4e6f-8d3f-b982b03ce802): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:15:04 crc kubenswrapper[4757]: E0129 15:15:04.514556 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-2jc8z" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" Jan 29 15:15:05 crc kubenswrapper[4757]: I0129 15:15:05.397070 4757 status_manager.go:851] "Failed to get status for pod" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" pod="openshift-marketplace/community-operators-pxw6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-pxw6w\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:05 crc kubenswrapper[4757]: I0129 15:15:05.398326 4757 status_manager.go:851] "Failed to get status for pod" podUID="c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:05 crc kubenswrapper[4757]: I0129 15:15:05.398625 4757 status_manager.go:851] "Failed to get status for pod" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" pod="openshift-marketplace/community-operators-c5pw7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c5pw7\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:05 crc kubenswrapper[4757]: E0129 15:15:05.398668 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-99p4m" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" Jan 29 15:15:05 crc kubenswrapper[4757]: I0129 15:15:05.399391 4757 status_manager.go:851] "Failed to get status for pod" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" pod="openshift-marketplace/redhat-operators-99p4m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-99p4m\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:05 crc kubenswrapper[4757]: I0129 15:15:05.399859 4757 status_manager.go:851] "Failed to get status for pod" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" pod="openshift-marketplace/certified-operators-2jc8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2jc8z\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:05 crc kubenswrapper[4757]: E0129 15:15:05.625331 4757 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="6.4s" Jan 29 15:15:07 crc kubenswrapper[4757]: I0129 15:15:07.399590 4757 status_manager.go:851] "Failed to get status for pod" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" pod="openshift-marketplace/community-operators-pxw6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-pxw6w\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:07 crc kubenswrapper[4757]: I0129 15:15:07.399940 4757 status_manager.go:851] "Failed to get status for pod" podUID="c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:07 crc kubenswrapper[4757]: I0129 15:15:07.400124 4757 status_manager.go:851] "Failed to get status for pod" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" pod="openshift-marketplace/community-operators-c5pw7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c5pw7\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:07 crc kubenswrapper[4757]: I0129 15:15:07.400327 4757 status_manager.go:851] "Failed to get status for pod" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" pod="openshift-marketplace/redhat-operators-99p4m" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-99p4m\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:07 crc kubenswrapper[4757]: I0129 15:15:07.400512 4757 status_manager.go:851] "Failed to get status for pod" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" pod="openshift-marketplace/certified-operators-2jc8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2jc8z\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:07 crc kubenswrapper[4757]: E0129 15:15:07.400812 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-57qth" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" Jan 29 15:15:07 crc kubenswrapper[4757]: I0129 15:15:07.400845 4757 status_manager.go:851] "Failed to get status for pod" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" pod="openshift-marketplace/certified-operators-57qth" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-57qth\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:07 crc kubenswrapper[4757]: I0129 15:15:07.401248 4757 status_manager.go:851] "Failed to get status for pod" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" pod="openshift-marketplace/community-operators-pxw6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-pxw6w\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:07 crc kubenswrapper[4757]: I0129 15:15:07.401602 4757 status_manager.go:851] "Failed to get status for pod" podUID="c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:07 crc kubenswrapper[4757]: I0129 15:15:07.401783 4757 status_manager.go:851] "Failed to get status for pod" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" pod="openshift-marketplace/community-operators-c5pw7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c5pw7\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:07 crc kubenswrapper[4757]: I0129 15:15:07.402207 4757 status_manager.go:851] "Failed to get status for pod" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" pod="openshift-marketplace/redhat-operators-99p4m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-99p4m\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:07 crc kubenswrapper[4757]: I0129 15:15:07.402775 4757 status_manager.go:851] "Failed to get status for pod" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" pod="openshift-marketplace/certified-operators-2jc8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2jc8z\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:09 crc kubenswrapper[4757]: I0129 15:15:09.396718 4757 status_manager.go:851] "Failed to get status for pod" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" pod="openshift-marketplace/community-operators-pxw6w" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-pxw6w\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:09 crc kubenswrapper[4757]: E0129 15:15:09.399143 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-v8v75" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" Jan 29 15:15:09 crc kubenswrapper[4757]: I0129 15:15:09.399138 4757 status_manager.go:851] "Failed to get status for pod" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" pod="openshift-marketplace/certified-operators-57qth" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-57qth\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:09 crc kubenswrapper[4757]: I0129 15:15:09.399588 4757 status_manager.go:851] "Failed to get status for pod" podUID="c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:09 crc kubenswrapper[4757]: I0129 15:15:09.400001 4757 status_manager.go:851] "Failed to get status for pod" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" pod="openshift-marketplace/community-operators-c5pw7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c5pw7\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:09 crc kubenswrapper[4757]: I0129 15:15:09.400431 4757 status_manager.go:851] "Failed to get status for pod" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" pod="openshift-marketplace/redhat-operators-v8v75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v8v75\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:09 crc kubenswrapper[4757]: I0129 15:15:09.400870 4757 status_manager.go:851] "Failed to get status for pod" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" pod="openshift-marketplace/redhat-operators-99p4m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-99p4m\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:09 crc kubenswrapper[4757]: I0129 15:15:09.401161 4757 status_manager.go:851] "Failed to get status for pod" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" pod="openshift-marketplace/certified-operators-2jc8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2jc8z\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:10 crc kubenswrapper[4757]: I0129 15:15:10.395941 4757 status_manager.go:851] "Failed to get status for pod" podUID="f2342b27-9060-4697-a957-65d07f099e82" pod="openshift-marketplace/redhat-marketplace-btp4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-btp4k\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:10 crc kubenswrapper[4757]: I0129 15:15:10.396477 4757 status_manager.go:851] "Failed to get status for pod" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" pod="openshift-marketplace/redhat-operators-v8v75" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v8v75\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:10 crc kubenswrapper[4757]: I0129 15:15:10.396875 4757 status_manager.go:851] "Failed to get status for pod" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" pod="openshift-marketplace/redhat-operators-99p4m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-99p4m\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:10 crc kubenswrapper[4757]: I0129 15:15:10.397182 4757 status_manager.go:851] "Failed to get status for pod" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" pod="openshift-marketplace/certified-operators-2jc8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2jc8z\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:10 crc kubenswrapper[4757]: I0129 15:15:10.397481 4757 status_manager.go:851] "Failed to get status for pod" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" pod="openshift-marketplace/certified-operators-57qth" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-57qth\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:10 crc kubenswrapper[4757]: I0129 15:15:10.397770 4757 status_manager.go:851] "Failed to get status for pod" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" pod="openshift-marketplace/community-operators-pxw6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-pxw6w\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:10 crc kubenswrapper[4757]: I0129 15:15:10.398071 4757 status_manager.go:851] "Failed to get status for pod" podUID="c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:10 crc kubenswrapper[4757]: I0129 15:15:10.398854 4757 status_manager.go:851] "Failed to get status for pod" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" pod="openshift-marketplace/community-operators-c5pw7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c5pw7\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:10 crc kubenswrapper[4757]: E0129 15:15:10.398882 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-btp4k" podUID="f2342b27-9060-4697-a957-65d07f099e82" Jan 29 15:15:10 crc kubenswrapper[4757]: E0129 15:15:10.399001 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jhlrf" podUID="92724a14-21db-441f-b509-142dc0a8dc15" Jan 29 15:15:10 crc kubenswrapper[4757]: I0129 15:15:10.399502 4757 status_manager.go:851] "Failed to get status for pod" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" pod="openshift-marketplace/redhat-operators-99p4m" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-99p4m\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:10 crc kubenswrapper[4757]: I0129 15:15:10.399752 4757 status_manager.go:851] "Failed to get status for pod" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" pod="openshift-marketplace/certified-operators-2jc8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2jc8z\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:10 crc kubenswrapper[4757]: I0129 15:15:10.399967 4757 status_manager.go:851] "Failed to get status for pod" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" pod="openshift-marketplace/community-operators-pxw6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-pxw6w\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:10 crc kubenswrapper[4757]: I0129 15:15:10.400198 4757 status_manager.go:851] "Failed to get status for pod" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" pod="openshift-marketplace/certified-operators-57qth" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-57qth\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:10 crc kubenswrapper[4757]: I0129 15:15:10.400469 4757 status_manager.go:851] "Failed to get status for pod" podUID="c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:10 crc kubenswrapper[4757]: I0129 15:15:10.400678 4757 status_manager.go:851] "Failed to get status for pod" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" pod="openshift-marketplace/community-operators-c5pw7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c5pw7\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:10 crc kubenswrapper[4757]: I0129 15:15:10.400865 4757 status_manager.go:851] "Failed to get status for pod" podUID="92724a14-21db-441f-b509-142dc0a8dc15" pod="openshift-marketplace/redhat-marketplace-jhlrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jhlrf\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:10 crc kubenswrapper[4757]: I0129 15:15:10.401093 4757 status_manager.go:851] "Failed to get status for pod" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" pod="openshift-marketplace/redhat-operators-v8v75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v8v75\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:10 crc kubenswrapper[4757]: I0129 15:15:10.401317 4757 status_manager.go:851] "Failed to get status for pod" podUID="f2342b27-9060-4697-a957-65d07f099e82" pod="openshift-marketplace/redhat-marketplace-btp4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-btp4k\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:11 crc kubenswrapper[4757]: E0129 15:15:11.013308 4757 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/events\": dial tcp 38.102.83.219:6443: connect: connection refused" event=< Jan 29 15:15:11 crc kubenswrapper[4757]: &Event{ObjectMeta:{controller-manager-85bc4bdcd-5zkz5.188f3c800f4ee78f openshift-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager,Name:controller-manager-85bc4bdcd-5zkz5,UID:7fea9b3d-4277-4a7f-92e6-23c5431051e4,APIVersion:v1,ResourceVersion:29822,FieldPath:,},Reason:FailedCreatePodSandBox,Message:Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-85bc4bdcd-5zkz5_openshift-controller-manager_7fea9b3d-4277-4a7f-92e6-23c5431051e4_0(c2ffcd1ca968036183af63d383217dec7c8194ea95fb713f5c21a144839c9823): error adding pod openshift-controller-manager_controller-manager-85bc4bdcd-5zkz5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c2ffcd1ca968036183af63d383217dec7c8194ea95fb713f5c21a144839c9823" Netns:"/var/run/netns/4dd49520-bec3-4c7b-9050-01d2bdd53ec6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-85bc4bdcd-5zkz5;K8S_POD_INFRA_CONTAINER_ID=c2ffcd1ca968036183af63d383217dec7c8194ea95fb713f5c21a144839c9823;K8S_POD_UID=7fea9b3d-4277-4a7f-92e6-23c5431051e4" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5] networking: Multus: [openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5/7fea9b3d-4277-4a7f-92e6-23c5431051e4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-85bc4bdcd-5zkz5 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-85bc4bdcd-5zkz5 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-85bc4bdcd-5zkz5?timeout=1m0s": dial tcp 38.102.83.219:6443: connect: connection refused Jan 29 15:15:11 crc kubenswrapper[4757]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"},Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 15:14:59.290204047 +0000 UTC m=+262.579454284,LastTimestamp:2026-01-29 15:14:59.290204047 +0000 UTC m=+262.579454284,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 29 15:15:11 crc kubenswrapper[4757]: > Jan 29 15:15:11 crc kubenswrapper[4757]: I0129 15:15:11.396698 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:15:11 crc kubenswrapper[4757]: I0129 15:15:11.397711 4757 status_manager.go:851] "Failed to get status for pod" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" pod="openshift-marketplace/community-operators-pxw6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-pxw6w\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:11 crc kubenswrapper[4757]: I0129 15:15:11.398086 4757 status_manager.go:851] "Failed to get status for pod" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" pod="openshift-marketplace/certified-operators-57qth" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-57qth\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:11 crc kubenswrapper[4757]: I0129 15:15:11.398641 4757 status_manager.go:851] "Failed to get status for pod" podUID="c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:11 crc kubenswrapper[4757]: I0129 15:15:11.398950 4757 status_manager.go:851] "Failed to get status for pod" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" pod="openshift-marketplace/community-operators-c5pw7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c5pw7\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:11 crc kubenswrapper[4757]: I0129 15:15:11.399148 4757 status_manager.go:851] "Failed to get status for pod" podUID="92724a14-21db-441f-b509-142dc0a8dc15" pod="openshift-marketplace/redhat-marketplace-jhlrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jhlrf\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:11 crc kubenswrapper[4757]: I0129 15:15:11.399406 4757 status_manager.go:851] "Failed to get status for pod" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" pod="openshift-marketplace/redhat-operators-v8v75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v8v75\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:11 crc kubenswrapper[4757]: I0129 15:15:11.399657 4757 status_manager.go:851] "Failed to get status for pod" podUID="f2342b27-9060-4697-a957-65d07f099e82" pod="openshift-marketplace/redhat-marketplace-btp4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-btp4k\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:11 crc kubenswrapper[4757]: I0129 15:15:11.400505 4757 status_manager.go:851] "Failed to get status for pod" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" pod="openshift-marketplace/redhat-operators-99p4m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-99p4m\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:11 crc kubenswrapper[4757]: I0129 15:15:11.408524 4757 status_manager.go:851] "Failed to get status for pod" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" pod="openshift-marketplace/certified-operators-2jc8z" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2jc8z\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:11 crc kubenswrapper[4757]: I0129 15:15:11.422096 4757 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="341afdcd-2c99-472f-9792-0ddd254aeab2" Jan 29 15:15:11 crc kubenswrapper[4757]: I0129 15:15:11.422142 4757 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="341afdcd-2c99-472f-9792-0ddd254aeab2" Jan 29 15:15:11 crc kubenswrapper[4757]: E0129 15:15:11.422733 4757 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:15:11 crc kubenswrapper[4757]: I0129 15:15:11.423415 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:15:11 crc kubenswrapper[4757]: E0129 15:15:11.448069 4757 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.219:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" volumeName="registry-storage" Jan 29 15:15:11 crc kubenswrapper[4757]: I0129 15:15:11.455973 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bb4c389a3efa3673cce4d73655bf3cb8ee15df57e8a13f12a58a83e541f378df"} Jan 29 15:15:12 crc kubenswrapper[4757]: E0129 15:15:12.026179 4757 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.219:6443: connect: connection refused" interval="7s" Jan 29 15:15:12 crc kubenswrapper[4757]: I0129 15:15:12.462799 4757 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="fa56ca796a91a8ea5e16ad9c310dea8eaf8a35e94f3d96a25698f0b0ddc27df8" exitCode=0 Jan 29 15:15:12 crc kubenswrapper[4757]: I0129 15:15:12.462897 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"fa56ca796a91a8ea5e16ad9c310dea8eaf8a35e94f3d96a25698f0b0ddc27df8"} Jan 29 15:15:12 crc kubenswrapper[4757]: I0129 15:15:12.463154 4757 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="341afdcd-2c99-472f-9792-0ddd254aeab2" Jan 29 15:15:12 crc kubenswrapper[4757]: I0129 15:15:12.463188 4757 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="341afdcd-2c99-472f-9792-0ddd254aeab2" Jan 29 15:15:12 crc kubenswrapper[4757]: I0129 15:15:12.463613 4757 status_manager.go:851] "Failed to get status for pod" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" pod="openshift-marketplace/certified-operators-2jc8z" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2jc8z\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:12 crc kubenswrapper[4757]: E0129 15:15:12.463645 4757 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:15:12 crc kubenswrapper[4757]: I0129 15:15:12.464088 4757 status_manager.go:851] "Failed to get status for pod" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" pod="openshift-marketplace/certified-operators-57qth" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-57qth\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:12 crc kubenswrapper[4757]: I0129 15:15:12.464333 4757 status_manager.go:851] "Failed to get status for pod" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" pod="openshift-marketplace/community-operators-pxw6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-pxw6w\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:12 crc kubenswrapper[4757]: I0129 15:15:12.464544 4757 status_manager.go:851] "Failed to get status for pod" podUID="c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:12 crc kubenswrapper[4757]: I0129 15:15:12.464822 4757 status_manager.go:851] "Failed to get status for pod" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" pod="openshift-marketplace/community-operators-c5pw7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c5pw7\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:12 crc kubenswrapper[4757]: I0129 15:15:12.465163 4757 status_manager.go:851] "Failed to get status for pod" podUID="92724a14-21db-441f-b509-142dc0a8dc15" pod="openshift-marketplace/redhat-marketplace-jhlrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jhlrf\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:12 crc kubenswrapper[4757]: I0129 15:15:12.465416 4757 status_manager.go:851] "Failed to get status for pod" podUID="f2342b27-9060-4697-a957-65d07f099e82" pod="openshift-marketplace/redhat-marketplace-btp4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-btp4k\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:12 crc kubenswrapper[4757]: I0129 15:15:12.465690 4757 status_manager.go:851] "Failed to get status for pod" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" pod="openshift-marketplace/redhat-operators-v8v75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v8v75\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:12 crc kubenswrapper[4757]: I0129 15:15:12.466498 4757 status_manager.go:851] "Failed to get status for pod" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" pod="openshift-marketplace/redhat-operators-99p4m" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-99p4m\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:12 crc kubenswrapper[4757]: I0129 15:15:12.467304 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 15:15:12 crc kubenswrapper[4757]: I0129 15:15:12.467370 4757 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121" exitCode=1 Jan 29 15:15:12 crc kubenswrapper[4757]: I0129 15:15:12.467408 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121"} Jan 29 15:15:12 crc kubenswrapper[4757]: I0129 15:15:12.467842 4757 scope.go:117] "RemoveContainer" containerID="73ec56aef61a68bbf17195bdd2e299c65d88a443daf216c758d78c958513d121" Jan 29 15:15:12 crc kubenswrapper[4757]: I0129 15:15:12.468098 4757 status_manager.go:851] "Failed to get status for pod" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" pod="openshift-marketplace/redhat-operators-99p4m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-99p4m\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:12 crc kubenswrapper[4757]: I0129 15:15:12.468334 4757 status_manager.go:851] "Failed to get status for pod" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" pod="openshift-marketplace/certified-operators-2jc8z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2jc8z\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:12 crc kubenswrapper[4757]: I0129 15:15:12.468552 4757 status_manager.go:851] "Failed to get status for pod" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" pod="openshift-marketplace/certified-operators-57qth" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-57qth\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:12 crc kubenswrapper[4757]: I0129 15:15:12.469443 4757 status_manager.go:851] "Failed to get status for pod" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" pod="openshift-marketplace/community-operators-pxw6w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-pxw6w\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:12 crc kubenswrapper[4757]: I0129 15:15:12.469660 4757 status_manager.go:851] "Failed to get status for pod" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" pod="openshift-marketplace/community-operators-c5pw7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-c5pw7\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:12 crc kubenswrapper[4757]: I0129 15:15:12.469937 4757 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 
38.102.83.219:6443: connect: connection refused" Jan 29 15:15:12 crc kubenswrapper[4757]: I0129 15:15:12.470250 4757 status_manager.go:851] "Failed to get status for pod" podUID="c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:12 crc kubenswrapper[4757]: I0129 15:15:12.470537 4757 status_manager.go:851] "Failed to get status for pod" podUID="92724a14-21db-441f-b509-142dc0a8dc15" pod="openshift-marketplace/redhat-marketplace-jhlrf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jhlrf\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:12 crc kubenswrapper[4757]: I0129 15:15:12.470713 4757 status_manager.go:851] "Failed to get status for pod" podUID="f2342b27-9060-4697-a957-65d07f099e82" pod="openshift-marketplace/redhat-marketplace-btp4k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-btp4k\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:12 crc kubenswrapper[4757]: I0129 15:15:12.470942 4757 status_manager.go:851] "Failed to get status for pod" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" pod="openshift-marketplace/redhat-operators-v8v75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-v8v75\": dial tcp 38.102.83.219:6443: connect: connection refused" Jan 29 15:15:13 crc kubenswrapper[4757]: I0129 15:15:13.482875 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8b408e25e336773d549ec35cbba179ce70d2bee350c5ad47637d23a1d8807c8f"} Jan 29 15:15:13 crc kubenswrapper[4757]: I0129 15:15:13.483178 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cfc118c8de1d204f8887c39eae7ce087314049a6eb7dddc3baa81643d1410c97"} Jan 29 15:15:13 crc kubenswrapper[4757]: I0129 15:15:13.483190 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"74bdb74e11bb8e5fbd9dcf9e96f69228ce2a9bf6a543ccb7ecb66f671e61a149"} Jan 29 15:15:13 crc kubenswrapper[4757]: I0129 15:15:13.483198 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8f1cb12d3869fd31984069d54305f76e8afcab140ca108ded8bcd24543086725"} Jan 29 15:15:13 crc kubenswrapper[4757]: I0129 15:15:13.491864 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 15:15:13 crc kubenswrapper[4757]: I0129 15:15:13.491924 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ddf8d4e27c65a4ee3f072eb427caa42a4f67716559f0fdbb142dd409eb2f1816"} Jan 29 15:15:13 crc kubenswrapper[4757]: I0129 15:15:13.800332 4757 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-mg555" podUID="e9d54611-82e4-4698-b654-62a1d7144225" containerName="oauth-openshift" containerID="cri-o://fa050baf64540fd87207c12d8c741141a192ddc36a36d1622a0c24bb548c888e" gracePeriod=15 Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.498704 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mg555" event={"ID":"e9d54611-82e4-4698-b654-62a1d7144225","Type":"ContainerDied","Data":"fa050baf64540fd87207c12d8c741141a192ddc36a36d1622a0c24bb548c888e"} Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.498702 4757 generic.go:334] "Generic (PLEG): container finished" podID="e9d54611-82e4-4698-b654-62a1d7144225" containerID="fa050baf64540fd87207c12d8c741141a192ddc36a36d1622a0c24bb548c888e" exitCode=0 Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.502718 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"38fc45ad45b33125d63160ed4704dd4b79c6d9c0968cad3c32f288b0a26e5de0"} Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.503239 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.503197 4757 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="341afdcd-2c99-472f-9792-0ddd254aeab2" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.503455 4757 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="341afdcd-2c99-472f-9792-0ddd254aeab2" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.697991 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.818241 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmsld\" (UniqueName: \"kubernetes.io/projected/e9d54611-82e4-4698-b654-62a1d7144225-kube-api-access-zmsld\") pod \"e9d54611-82e4-4698-b654-62a1d7144225\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.818308 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-router-certs\") pod \"e9d54611-82e4-4698-b654-62a1d7144225\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.818340 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-user-template-error\") pod \"e9d54611-82e4-4698-b654-62a1d7144225\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.818369 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-trusted-ca-bundle\") pod \"e9d54611-82e4-4698-b654-62a1d7144225\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.818400 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-user-template-provider-selection\") pod \"e9d54611-82e4-4698-b654-62a1d7144225\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.818448 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-serving-cert\") pod \"e9d54611-82e4-4698-b654-62a1d7144225\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.818470 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9d54611-82e4-4698-b654-62a1d7144225-audit-policies\") pod \"e9d54611-82e4-4698-b654-62a1d7144225\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.818525 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-ocp-branding-template\") pod \"e9d54611-82e4-4698-b654-62a1d7144225\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.818557 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-service-ca\") pod \"e9d54611-82e4-4698-b654-62a1d7144225\" (UID: 
\"e9d54611-82e4-4698-b654-62a1d7144225\") " Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.818613 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-user-idp-0-file-data\") pod \"e9d54611-82e4-4698-b654-62a1d7144225\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.818647 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-session\") pod \"e9d54611-82e4-4698-b654-62a1d7144225\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.818669 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9d54611-82e4-4698-b654-62a1d7144225-audit-dir\") pod \"e9d54611-82e4-4698-b654-62a1d7144225\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.818688 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-cliconfig\") pod \"e9d54611-82e4-4698-b654-62a1d7144225\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.818718 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-user-template-login\") pod \"e9d54611-82e4-4698-b654-62a1d7144225\" (UID: \"e9d54611-82e4-4698-b654-62a1d7144225\") " Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.820519 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e9d54611-82e4-4698-b654-62a1d7144225" (UID: "e9d54611-82e4-4698-b654-62a1d7144225"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.824841 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e9d54611-82e4-4698-b654-62a1d7144225" (UID: "e9d54611-82e4-4698-b654-62a1d7144225"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.824935 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9d54611-82e4-4698-b654-62a1d7144225-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e9d54611-82e4-4698-b654-62a1d7144225" (UID: "e9d54611-82e4-4698-b654-62a1d7144225"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.825972 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e9d54611-82e4-4698-b654-62a1d7144225" (UID: "e9d54611-82e4-4698-b654-62a1d7144225"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.826540 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "e9d54611-82e4-4698-b654-62a1d7144225" (UID: "e9d54611-82e4-4698-b654-62a1d7144225"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.826963 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e9d54611-82e4-4698-b654-62a1d7144225" (UID: "e9d54611-82e4-4698-b654-62a1d7144225"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.827307 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "e9d54611-82e4-4698-b654-62a1d7144225" (UID: "e9d54611-82e4-4698-b654-62a1d7144225"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.827434 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e9d54611-82e4-4698-b654-62a1d7144225" (UID: "e9d54611-82e4-4698-b654-62a1d7144225"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.827462 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9d54611-82e4-4698-b654-62a1d7144225-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e9d54611-82e4-4698-b654-62a1d7144225" (UID: "e9d54611-82e4-4698-b654-62a1d7144225"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.828632 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e9d54611-82e4-4698-b654-62a1d7144225" (UID: "e9d54611-82e4-4698-b654-62a1d7144225"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.831451 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9d54611-82e4-4698-b654-62a1d7144225-kube-api-access-zmsld" (OuterVolumeSpecName: "kube-api-access-zmsld") pod "e9d54611-82e4-4698-b654-62a1d7144225" (UID: "e9d54611-82e4-4698-b654-62a1d7144225"). InnerVolumeSpecName "kube-api-access-zmsld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.832639 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e9d54611-82e4-4698-b654-62a1d7144225" (UID: "e9d54611-82e4-4698-b654-62a1d7144225"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.833512 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e9d54611-82e4-4698-b654-62a1d7144225" (UID: "e9d54611-82e4-4698-b654-62a1d7144225"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.834968 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e9d54611-82e4-4698-b654-62a1d7144225" (UID: "e9d54611-82e4-4698-b654-62a1d7144225"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.920653 4757 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.920689 4757 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.920701 4757 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.920711 4757 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9d54611-82e4-4698-b654-62a1d7144225-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.920720 4757 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.920729 4757 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.920738 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmsld\" (UniqueName: \"kubernetes.io/projected/e9d54611-82e4-4698-b654-62a1d7144225-kube-api-access-zmsld\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.920746 4757 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.920754 4757 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.920795 4757 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.920816 4757 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.920827 4757 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.920836 4757 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9d54611-82e4-4698-b654-62a1d7144225-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:14 crc kubenswrapper[4757]: I0129 15:15:14.920844 4757 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e9d54611-82e4-4698-b654-62a1d7144225-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:15 crc kubenswrapper[4757]: I0129 15:15:15.395813 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:15:15 crc kubenswrapper[4757]: I0129 15:15:15.396290 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:15:15 crc kubenswrapper[4757]: E0129 15:15:15.397672 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pxw6w" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" Jan 29 15:15:15 crc kubenswrapper[4757]: I0129 15:15:15.510701 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mg555" event={"ID":"e9d54611-82e4-4698-b654-62a1d7144225","Type":"ContainerDied","Data":"ba612900960cb53da5206d5304efd39af924b77d265268e99e6a5c7b3990902b"} Jan 29 15:15:15 crc kubenswrapper[4757]: I0129 15:15:15.510984 4757 scope.go:117] "RemoveContainer" containerID="fa050baf64540fd87207c12d8c741141a192ddc36a36d1622a0c24bb548c888e" Jan 29 15:15:15 crc kubenswrapper[4757]: I0129 15:15:15.510776 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mg555" Jan 29 15:15:15 crc kubenswrapper[4757]: W0129 15:15:15.819500 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fea9b3d_4277_4a7f_92e6_23c5431051e4.slice/crio-fb2fb4acd8015d5f2bdaff348ce61c9fb2426988d437590c1a0560190d5e0986 WatchSource:0}: Error finding container fb2fb4acd8015d5f2bdaff348ce61c9fb2426988d437590c1a0560190d5e0986: Status 404 returned error can't find the container with id fb2fb4acd8015d5f2bdaff348ce61c9fb2426988d437590c1a0560190d5e0986 Jan 29 15:15:16 crc kubenswrapper[4757]: E0129 15:15:16.397940 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c5pw7" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" Jan 29 15:15:16 crc kubenswrapper[4757]: I0129 15:15:16.424580 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:15:16 crc kubenswrapper[4757]: I0129 15:15:16.424633 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:15:16 crc kubenswrapper[4757]: I0129 15:15:16.431985 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:15:16 crc kubenswrapper[4757]: I0129 15:15:16.521650 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" event={"ID":"7fea9b3d-4277-4a7f-92e6-23c5431051e4","Type":"ContainerStarted","Data":"9fe6fd9260cc4c532c528ffd70cf74beff48b61dbd2f1a53ef74ca7d0ac89e1d"} Jan 29 15:15:16 crc kubenswrapper[4757]: I0129 15:15:16.521702 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" event={"ID":"7fea9b3d-4277-4a7f-92e6-23c5431051e4","Type":"ContainerStarted","Data":"fb2fb4acd8015d5f2bdaff348ce61c9fb2426988d437590c1a0560190d5e0986"} Jan 29 15:15:16 crc kubenswrapper[4757]: I0129 15:15:16.522376 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:15:16 crc kubenswrapper[4757]: I0129 15:15:16.529232 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:15:17 crc kubenswrapper[4757]: E0129 15:15:17.398418 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2jc8z" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" Jan 29 15:15:17 crc kubenswrapper[4757]: I0129 15:15:17.484062 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:15:17 crc kubenswrapper[4757]: I0129 15:15:17.491207 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:15:17 crc kubenswrapper[4757]: I0129 15:15:17.527793 4757 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:15:19 crc kubenswrapper[4757]: E0129 15:15:19.398902 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-99p4m" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" Jan 29 15:15:19 crc kubenswrapper[4757]: I0129 15:15:19.511672 4757 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:15:19 crc kubenswrapper[4757]: I0129 15:15:19.539446 4757 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="341afdcd-2c99-472f-9792-0ddd254aeab2" Jan 29 15:15:19 crc kubenswrapper[4757]: I0129 15:15:19.539691 4757 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="341afdcd-2c99-472f-9792-0ddd254aeab2" Jan 29 15:15:19 crc kubenswrapper[4757]: I0129 15:15:19.544702 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:15:19 crc kubenswrapper[4757]: I0129 15:15:19.616339 4757 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="70d9c4d8-8a27-4594-ad16-f9adcfd88459" Jan 29 15:15:20 crc kubenswrapper[4757]: I0129 15:15:20.543021 4757 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="341afdcd-2c99-472f-9792-0ddd254aeab2" Jan 29 15:15:20 crc kubenswrapper[4757]: I0129 15:15:20.543530 4757 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="341afdcd-2c99-472f-9792-0ddd254aeab2" Jan 29 15:15:20 crc kubenswrapper[4757]: I0129 15:15:20.546867 4757 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="70d9c4d8-8a27-4594-ad16-f9adcfd88459" Jan 29 15:15:21 crc kubenswrapper[4757]: E0129 15:15:21.396885 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-57qth" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" Jan 29 15:15:21 crc kubenswrapper[4757]: E0129 15:15:21.396907 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jhlrf" podUID="92724a14-21db-441f-b509-142dc0a8dc15" Jan 29 15:15:22 crc kubenswrapper[4757]: E0129 15:15:22.399127 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-v8v75" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" Jan 29 15:15:23 crc kubenswrapper[4757]: E0129 15:15:23.398725 4757 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-btp4k" podUID="f2342b27-9060-4697-a957-65d07f099e82" Jan 29 15:15:27 crc kubenswrapper[4757]: E0129 15:15:27.397525 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pxw6w" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" Jan 29 15:15:29 crc kubenswrapper[4757]: I0129 15:15:29.193335 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 29 15:15:29 crc kubenswrapper[4757]: I0129 15:15:29.339800 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 29 15:15:29 crc kubenswrapper[4757]: E0129 15:15:29.397490 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c5pw7" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" Jan 29 15:15:29 crc kubenswrapper[4757]: I0129 15:15:29.712619 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 15:15:29 crc kubenswrapper[4757]: I0129 15:15:29.934999 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 15:15:30 crc kubenswrapper[4757]: I0129 15:15:30.360894 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 29 15:15:30 crc kubenswrapper[4757]: I0129 15:15:30.384650 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 29 15:15:30 crc kubenswrapper[4757]: I0129 15:15:30.499320 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 15:15:30 crc kubenswrapper[4757]: I0129 15:15:30.616681 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 15:15:30 crc kubenswrapper[4757]: I0129 15:15:30.700566 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:15:31 crc kubenswrapper[4757]: I0129 15:15:31.101880 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 15:15:31 crc kubenswrapper[4757]: I0129 15:15:31.293871 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 15:15:31 crc kubenswrapper[4757]: I0129 15:15:31.297574 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 15:15:31 crc kubenswrapper[4757]: I0129 15:15:31.376923 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 
15:15:31 crc kubenswrapper[4757]: E0129 15:15:31.397463 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2jc8z" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" Jan 29 15:15:31 crc kubenswrapper[4757]: I0129 15:15:31.686044 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 29 15:15:31 crc kubenswrapper[4757]: I0129 15:15:31.812045 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 15:15:31 crc kubenswrapper[4757]: I0129 15:15:31.869117 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 15:15:32 crc kubenswrapper[4757]: I0129 15:15:32.049876 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 15:15:32 crc kubenswrapper[4757]: E0129 15:15:32.400560 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-57qth" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" Jan 29 15:15:32 crc kubenswrapper[4757]: I0129 15:15:32.475744 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 15:15:32 crc kubenswrapper[4757]: I0129 15:15:32.572506 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 15:15:32 crc kubenswrapper[4757]: I0129 15:15:32.606173 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 15:15:32 crc kubenswrapper[4757]: I0129 15:15:32.606228 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 29 15:15:32 crc kubenswrapper[4757]: I0129 15:15:32.897098 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 15:15:32 crc kubenswrapper[4757]: I0129 15:15:32.906300 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 15:15:33 crc kubenswrapper[4757]: I0129 15:15:33.007422 4757 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 15:15:33 crc kubenswrapper[4757]: I0129 15:15:33.316574 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 15:15:33 crc kubenswrapper[4757]: I0129 15:15:33.445183 4757 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 15:15:33 crc kubenswrapper[4757]: I0129 15:15:33.490678 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 15:15:33 crc kubenswrapper[4757]: E0129 15:15:33.543821 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:15:33 crc kubenswrapper[4757]: E0129 15:15:33.544007 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rr9fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-99p4m_openshift-marketplace(6f40510d-f93a-4a84-ad4a-e503fa0bdf09): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:15:33 crc kubenswrapper[4757]: E0129 15:15:33.545465 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-99p4m" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" Jan 29 15:15:33 crc kubenswrapper[4757]: I0129 15:15:33.568179 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 15:15:33 crc kubenswrapper[4757]: I0129 15:15:33.607836 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 15:15:33 crc kubenswrapper[4757]: I0129 15:15:33.631229 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 29 15:15:33 crc kubenswrapper[4757]: I0129 15:15:33.637815 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 29 15:15:33 crc kubenswrapper[4757]: I0129 15:15:33.715773 4757 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 15:15:33 crc kubenswrapper[4757]: I0129 15:15:33.738852 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 15:15:33 crc kubenswrapper[4757]: I0129 15:15:33.842228 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 15:15:33 crc kubenswrapper[4757]: I0129 15:15:33.969046 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 15:15:33 crc kubenswrapper[4757]: I0129 15:15:33.995472 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 15:15:34 crc kubenswrapper[4757]: I0129 15:15:34.007592 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 15:15:34 crc kubenswrapper[4757]: I0129 15:15:34.020556 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 15:15:34 crc kubenswrapper[4757]: I0129 15:15:34.027564 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 15:15:34 crc kubenswrapper[4757]: I0129 15:15:34.028243 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 29 15:15:34 crc kubenswrapper[4757]: I0129 15:15:34.092708 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 15:15:34 crc kubenswrapper[4757]: I0129 15:15:34.182230 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 15:15:34 crc kubenswrapper[4757]: I0129 15:15:34.237946 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 15:15:34 crc kubenswrapper[4757]: I0129 15:15:34.322187 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 15:15:34 crc kubenswrapper[4757]: I0129 15:15:34.384693 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 15:15:34 crc kubenswrapper[4757]: E0129 15:15:34.397878 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jhlrf" podUID="92724a14-21db-441f-b509-142dc0a8dc15" Jan 29 15:15:34 crc kubenswrapper[4757]: I0129 15:15:34.467112 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 15:15:34 crc kubenswrapper[4757]: I0129 15:15:34.532438 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 15:15:34 crc kubenswrapper[4757]: I0129 15:15:34.633329 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 15:15:34 crc kubenswrapper[4757]: I0129 15:15:34.662558 4757 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 15:15:34 crc kubenswrapper[4757]: I0129 15:15:34.679212 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 15:15:34 crc kubenswrapper[4757]: I0129 15:15:34.689467 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 15:15:34 crc kubenswrapper[4757]: I0129 15:15:34.699902 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 15:15:34 crc kubenswrapper[4757]: I0129 15:15:34.751206 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 15:15:34 crc kubenswrapper[4757]: I0129 15:15:34.809652 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 15:15:34 crc kubenswrapper[4757]: I0129 15:15:34.934015 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 15:15:34 crc kubenswrapper[4757]: I0129 15:15:34.983763 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 15:15:34 crc kubenswrapper[4757]: I0129 15:15:34.992742 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 29 15:15:35 crc kubenswrapper[4757]: I0129 15:15:35.000704 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 29 15:15:35 crc kubenswrapper[4757]: I0129 15:15:35.038407 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 29 15:15:35 crc kubenswrapper[4757]: I0129 15:15:35.047804 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 15:15:35 crc kubenswrapper[4757]: I0129 15:15:35.081957 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 15:15:35 crc kubenswrapper[4757]: I0129 15:15:35.145908 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 15:15:35 crc kubenswrapper[4757]: I0129 15:15:35.300341 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 29 15:15:35 crc kubenswrapper[4757]: I0129 15:15:35.360510 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 15:15:35 crc kubenswrapper[4757]: E0129 15:15:35.398148 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-v8v75" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" Jan 29 15:15:35 crc kubenswrapper[4757]: I0129 15:15:35.416135 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 29 
15:15:35 crc kubenswrapper[4757]: I0129 15:15:35.465042 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 15:15:35 crc kubenswrapper[4757]: I0129 15:15:35.510817 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 15:15:35 crc kubenswrapper[4757]: I0129 15:15:35.544175 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 15:15:35 crc kubenswrapper[4757]: I0129 15:15:35.580505 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 15:15:35 crc kubenswrapper[4757]: I0129 15:15:35.585339 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 15:15:35 crc kubenswrapper[4757]: I0129 15:15:35.764840 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 15:15:35 crc kubenswrapper[4757]: I0129 15:15:35.839218 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 15:15:35 crc kubenswrapper[4757]: I0129 15:15:35.895942 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 29 15:15:35 crc kubenswrapper[4757]: I0129 15:15:35.912298 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 15:15:35 crc kubenswrapper[4757]: I0129 15:15:35.918871 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 15:15:35 crc kubenswrapper[4757]: I0129 15:15:35.991355 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 15:15:36 crc kubenswrapper[4757]: I0129 15:15:36.045872 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 15:15:36 crc kubenswrapper[4757]: I0129 15:15:36.047128 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 15:15:36 crc kubenswrapper[4757]: I0129 15:15:36.116020 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 15:15:36 crc kubenswrapper[4757]: I0129 15:15:36.195575 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 15:15:36 crc kubenswrapper[4757]: I0129 15:15:36.264378 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 29 15:15:36 crc kubenswrapper[4757]: I0129 15:15:36.361298 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 15:15:36 crc kubenswrapper[4757]: I0129 15:15:36.656260 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 15:15:36 crc kubenswrapper[4757]: I0129 15:15:36.701030 4757 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 15:15:36 crc kubenswrapper[4757]: I0129 15:15:36.760948 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 15:15:36 crc kubenswrapper[4757]: I0129 15:15:36.829357 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 15:15:36 crc kubenswrapper[4757]: I0129 15:15:36.857880 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 29 15:15:36 crc kubenswrapper[4757]: I0129 15:15:36.959489 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 15:15:36 crc kubenswrapper[4757]: I0129 15:15:36.992893 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 29 15:15:37 crc kubenswrapper[4757]: I0129 15:15:37.068867 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 29 15:15:37 crc kubenswrapper[4757]: I0129 15:15:37.093129 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 15:15:37 crc kubenswrapper[4757]: I0129 15:15:37.150319 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 15:15:37 crc kubenswrapper[4757]: I0129 15:15:37.157398 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 15:15:37 crc kubenswrapper[4757]: I0129 15:15:37.181324 4757 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 15:15:37 crc kubenswrapper[4757]: I0129 15:15:37.199014 4757 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 29 15:15:37 crc kubenswrapper[4757]: I0129 15:15:37.208661 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 15:15:37 crc kubenswrapper[4757]: E0129 15:15:37.410727 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-btp4k" podUID="f2342b27-9060-4697-a957-65d07f099e82" Jan 29 15:15:37 crc kubenswrapper[4757]: I0129 15:15:37.450780 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 15:15:37 crc kubenswrapper[4757]: I0129 15:15:37.730209 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 29 15:15:37 crc kubenswrapper[4757]: I0129 15:15:37.799761 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 15:15:37 crc kubenswrapper[4757]: I0129 15:15:37.850715 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 15:15:37 crc kubenswrapper[4757]: I0129 15:15:37.861261 4757 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 15:15:37 crc kubenswrapper[4757]: I0129 15:15:37.880542 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 15:15:37 crc kubenswrapper[4757]: I0129 15:15:37.905802 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 29 15:15:37 crc kubenswrapper[4757]: I0129 15:15:37.989602 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 29 15:15:38 crc kubenswrapper[4757]: I0129 15:15:38.010942 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 15:15:38 crc kubenswrapper[4757]: I0129 15:15:38.042322 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 29 15:15:38 crc kubenswrapper[4757]: I0129 15:15:38.042458 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 29 15:15:38 crc kubenswrapper[4757]: I0129 15:15:38.065055 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 15:15:38 crc kubenswrapper[4757]: I0129 15:15:38.089506 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 29 15:15:38 crc kubenswrapper[4757]: I0129 15:15:38.149038 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 29 15:15:38 crc kubenswrapper[4757]: I0129 15:15:38.213815 4757 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 15:15:38 crc kubenswrapper[4757]: I0129 15:15:38.270953 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 15:15:38 crc kubenswrapper[4757]: I0129 15:15:38.271441 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 15:15:38 crc kubenswrapper[4757]: I0129 15:15:38.309558 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 29 15:15:38 crc kubenswrapper[4757]: I0129 15:15:38.321637 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 15:15:38 crc kubenswrapper[4757]: I0129 15:15:38.449831 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 15:15:38 crc kubenswrapper[4757]: I0129 15:15:38.449911 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 15:15:38 crc kubenswrapper[4757]: I0129 15:15:38.475727 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 15:15:38 crc kubenswrapper[4757]: I0129 15:15:38.527044 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 29 15:15:38 crc 
kubenswrapper[4757]: I0129 15:15:38.532355 4757 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 29 15:15:38 crc kubenswrapper[4757]: I0129 15:15:38.551942 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 15:15:38 crc kubenswrapper[4757]: I0129 15:15:38.714736 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 15:15:38 crc kubenswrapper[4757]: I0129 15:15:38.743106 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 15:15:38 crc kubenswrapper[4757]: I0129 15:15:38.889597 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 15:15:38 crc kubenswrapper[4757]: I0129 15:15:38.896399 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 15:15:38 crc kubenswrapper[4757]: I0129 15:15:38.951937 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 29 15:15:38 crc kubenswrapper[4757]: I0129 15:15:38.971536 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 29 15:15:39 crc kubenswrapper[4757]: I0129 15:15:39.009295 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 15:15:39 crc kubenswrapper[4757]: I0129 15:15:39.071131 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 29 15:15:39 crc kubenswrapper[4757]: I0129 15:15:39.279532 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 15:15:39 crc kubenswrapper[4757]: I0129 15:15:39.342571 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 15:15:39 crc kubenswrapper[4757]: I0129 15:15:39.366294 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 15:15:39 crc kubenswrapper[4757]: I0129 15:15:39.439831 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 15:15:39 crc kubenswrapper[4757]: I0129 15:15:39.484485 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 15:15:39 crc kubenswrapper[4757]: I0129 15:15:39.491490 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 29 15:15:39 crc kubenswrapper[4757]: I0129 15:15:39.545890 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 15:15:39 crc kubenswrapper[4757]: I0129 15:15:39.579716 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 15:15:39 crc kubenswrapper[4757]: I0129 15:15:39.692021 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 15:15:39 
crc kubenswrapper[4757]: I0129 15:15:39.693993 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 15:15:39 crc kubenswrapper[4757]: I0129 15:15:39.831867 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 15:15:39 crc kubenswrapper[4757]: I0129 15:15:39.877160 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 15:15:39 crc kubenswrapper[4757]: I0129 15:15:39.943167 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 15:15:40 crc kubenswrapper[4757]: I0129 15:15:40.033159 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 29 15:15:40 crc kubenswrapper[4757]: I0129 15:15:40.038286 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 29 15:15:40 crc kubenswrapper[4757]: I0129 15:15:40.041916 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 15:15:40 crc kubenswrapper[4757]: I0129 15:15:40.066766 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 15:15:40 crc kubenswrapper[4757]: I0129 15:15:40.120323 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 15:15:40 crc kubenswrapper[4757]: I0129 15:15:40.148750 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 15:15:40 crc kubenswrapper[4757]: I0129 15:15:40.198326 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 15:15:40 crc kubenswrapper[4757]: I0129 15:15:40.216156 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 15:15:40 crc kubenswrapper[4757]: I0129 15:15:40.220943 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 29 15:15:40 crc kubenswrapper[4757]: I0129 15:15:40.249177 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 15:15:40 crc kubenswrapper[4757]: I0129 15:15:40.387688 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 15:15:40 crc kubenswrapper[4757]: E0129 15:15:40.398611 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c5pw7" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" Jan 29 15:15:40 crc kubenswrapper[4757]: I0129 15:15:40.453438 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 15:15:40 crc kubenswrapper[4757]: I0129 15:15:40.482148 4757 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 15:15:40 crc kubenswrapper[4757]: I0129 15:15:40.515410 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 15:15:40 crc kubenswrapper[4757]: I0129 15:15:40.534523 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 29 15:15:40 crc kubenswrapper[4757]: I0129 15:15:40.554909 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 15:15:40 crc kubenswrapper[4757]: I0129 15:15:40.566389 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 15:15:40 crc kubenswrapper[4757]: I0129 15:15:40.568993 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 15:15:40 crc kubenswrapper[4757]: I0129 15:15:40.649050 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 15:15:40 crc kubenswrapper[4757]: I0129 15:15:40.683063 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 15:15:40 crc kubenswrapper[4757]: I0129 15:15:40.724470 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 29 15:15:40 crc kubenswrapper[4757]: I0129 15:15:40.745130 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 15:15:40 crc kubenswrapper[4757]: I0129 15:15:40.879331 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 29 15:15:40 crc kubenswrapper[4757]: I0129 15:15:40.963433 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 29 15:15:40 crc kubenswrapper[4757]: I0129 15:15:40.980732 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 15:15:40 crc kubenswrapper[4757]: I0129 15:15:40.985303 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 15:15:40 crc kubenswrapper[4757]: I0129 15:15:40.988836 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 29 15:15:41 crc kubenswrapper[4757]: I0129 15:15:41.010750 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 29 15:15:41 crc kubenswrapper[4757]: I0129 15:15:41.069390 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 29 15:15:41 crc kubenswrapper[4757]: I0129 15:15:41.147068 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 15:15:41 crc kubenswrapper[4757]: I0129 15:15:41.199562 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 15:15:41 crc kubenswrapper[4757]: I0129 15:15:41.208600 4757 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"audit-1" Jan 29 15:15:41 crc kubenswrapper[4757]: I0129 15:15:41.224103 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 15:15:41 crc kubenswrapper[4757]: I0129 15:15:41.224142 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 29 15:15:41 crc kubenswrapper[4757]: I0129 15:15:41.252760 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 15:15:41 crc kubenswrapper[4757]: I0129 15:15:41.280763 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 15:15:41 crc kubenswrapper[4757]: I0129 15:15:41.294168 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 15:15:41 crc kubenswrapper[4757]: I0129 15:15:41.321969 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 15:15:41 crc kubenswrapper[4757]: I0129 15:15:41.381724 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 15:15:41 crc kubenswrapper[4757]: E0129 15:15:41.397757 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pxw6w" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" Jan 29 15:15:41 crc kubenswrapper[4757]: I0129 15:15:41.440755 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 15:15:41 crc kubenswrapper[4757]: I0129 15:15:41.443987 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 15:15:41 crc kubenswrapper[4757]: I0129 15:15:41.512785 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 15:15:41 crc kubenswrapper[4757]: I0129 15:15:41.562513 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 29 15:15:41 crc kubenswrapper[4757]: I0129 15:15:41.769410 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 15:15:41 crc kubenswrapper[4757]: I0129 15:15:41.787005 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 15:15:41 crc kubenswrapper[4757]: I0129 15:15:41.841208 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 15:15:41 crc kubenswrapper[4757]: I0129 15:15:41.891032 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 15:15:41 crc kubenswrapper[4757]: I0129 15:15:41.900148 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 29 15:15:42 crc kubenswrapper[4757]: I0129 15:15:42.087161 4757 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 15:15:42 crc kubenswrapper[4757]: I0129 15:15:42.112614 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 15:15:42 crc kubenswrapper[4757]: I0129 15:15:42.128596 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 29 15:15:42 crc kubenswrapper[4757]: I0129 15:15:42.147361 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 15:15:42 crc kubenswrapper[4757]: I0129 15:15:42.226132 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 15:15:42 crc kubenswrapper[4757]: I0129 15:15:42.298657 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 15:15:42 crc kubenswrapper[4757]: I0129 15:15:42.365146 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 29 15:15:42 crc kubenswrapper[4757]: I0129 15:15:42.375077 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 15:15:42 crc kubenswrapper[4757]: I0129 15:15:42.395649 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 29 15:15:42 crc kubenswrapper[4757]: I0129 15:15:42.409450 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 15:15:42 crc kubenswrapper[4757]: I0129 15:15:42.418781 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 15:15:42 crc kubenswrapper[4757]: I0129 15:15:42.451521 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 15:15:42 crc kubenswrapper[4757]: I0129 15:15:42.483448 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 15:15:42 crc kubenswrapper[4757]: I0129 15:15:42.528226 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 15:15:42 crc kubenswrapper[4757]: I0129 15:15:42.658962 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 15:15:42 crc kubenswrapper[4757]: I0129 15:15:42.686334 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.094525 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.299407 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.303759 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 15:15:43 crc kubenswrapper[4757]: E0129 15:15:43.398251 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2jc8z" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.584931 4757 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.585668 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" podStartSLOduration=46.585651004 podStartE2EDuration="46.585651004s" podCreationTimestamp="2026-01-29 15:14:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:15:19.555156845 +0000 UTC m=+282.844407082" watchObservedRunningTime="2026-01-29 15:15:43.585651004 +0000 UTC m=+306.874901241" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.589208 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mg555","openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.589261 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6fd99bdd67-cctrw","openshift-operator-lifecycle-manager/collect-profiles-29494995-ncxzx","openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz","openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 15:15:43 crc kubenswrapper[4757]: E0129 15:15:43.589458 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae" containerName="installer" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.589472 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae" containerName="installer" Jan 29 15:15:43 crc kubenswrapper[4757]: E0129 15:15:43.589504 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d54611-82e4-4698-b654-62a1d7144225" containerName="oauth-openshift" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.589515 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d54611-82e4-4698-b654-62a1d7144225" containerName="oauth-openshift" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.589609 4757 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="341afdcd-2c99-472f-9792-0ddd254aeab2" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.589644 4757 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="341afdcd-2c99-472f-9792-0ddd254aeab2" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.589631 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9d54611-82e4-4698-b654-62a1d7144225" containerName="oauth-openshift" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.589779 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1d7fd44-c8eb-46b1-a4a2-d0b983ce77ae" containerName="installer" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.590278 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.590712 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.590988 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5"] Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.591308 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-ncxzx" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.597988 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.598517 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.598648 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.600900 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.601298 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.601428 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.601534 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.601820 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.601924 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.602022 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.602145 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.602241 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.602359 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.602448 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.602537 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.602634 4757 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.602720 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.602814 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.602906 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.603000 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.622195 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.622871 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.622058 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.633880 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.636928 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.711259 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.711520 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.734081 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-user-template-error\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.734156 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d5f590-67b3-4e37-a9e1-55d509992f17-config\") pod \"route-controller-manager-7f8dbbc44b-xd2zz\" (UID: \"e5d5f590-67b3-4e37-a9e1-55d509992f17\") " pod="openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.734180 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18027a76-8991-403e-8dec-d0115c4cb164-secret-volume\") pod \"collect-profiles-29494995-ncxzx\" (UID: \"18027a76-8991-403e-8dec-d0115c4cb164\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-ncxzx" Jan 29 15:15:43 crc kubenswrapper[4757]: 
I0129 15:15:43.734200 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18027a76-8991-403e-8dec-d0115c4cb164-config-volume\") pod \"collect-profiles-29494995-ncxzx\" (UID: \"18027a76-8991-403e-8dec-d0115c4cb164\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-ncxzx" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.734217 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.734234 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f6d06924-4a31-421e-8e42-1d3900edd191-audit-policies\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.734279 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.734296 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.734313 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.734335 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7kcn\" (UniqueName: \"kubernetes.io/projected/18027a76-8991-403e-8dec-d0115c4cb164-kube-api-access-p7kcn\") pod \"collect-profiles-29494995-ncxzx\" (UID: \"18027a76-8991-403e-8dec-d0115c4cb164\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-ncxzx" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.734356 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-system-session\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: 
\"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.734372 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.734386 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-user-template-login\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.734418 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7qwz\" (UniqueName: \"kubernetes.io/projected/e5d5f590-67b3-4e37-a9e1-55d509992f17-kube-api-access-h7qwz\") pod \"route-controller-manager-7f8dbbc44b-xd2zz\" (UID: \"e5d5f590-67b3-4e37-a9e1-55d509992f17\") " pod="openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.734434 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.734454 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7njkt\" (UniqueName: \"kubernetes.io/projected/f6d06924-4a31-421e-8e42-1d3900edd191-kube-api-access-7njkt\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.734477 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f6d06924-4a31-421e-8e42-1d3900edd191-audit-dir\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.734493 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5d5f590-67b3-4e37-a9e1-55d509992f17-client-ca\") pod \"route-controller-manager-7f8dbbc44b-xd2zz\" (UID: \"e5d5f590-67b3-4e37-a9e1-55d509992f17\") " pod="openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.734511 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.734529 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.734559 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5d5f590-67b3-4e37-a9e1-55d509992f17-serving-cert\") pod \"route-controller-manager-7f8dbbc44b-xd2zz\" (UID: \"e5d5f590-67b3-4e37-a9e1-55d509992f17\") " pod="openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.835290 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-system-session\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.835645 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.835777 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-user-template-login\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.835895 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7qwz\" (UniqueName: \"kubernetes.io/projected/e5d5f590-67b3-4e37-a9e1-55d509992f17-kube-api-access-h7qwz\") pod \"route-controller-manager-7f8dbbc44b-xd2zz\" (UID: \"e5d5f590-67b3-4e37-a9e1-55d509992f17\") " pod="openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.836005 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.836112 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7njkt\" (UniqueName: \"kubernetes.io/projected/f6d06924-4a31-421e-8e42-1d3900edd191-kube-api-access-7njkt\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.836219 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f6d06924-4a31-421e-8e42-1d3900edd191-audit-dir\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.836403 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5d5f590-67b3-4e37-a9e1-55d509992f17-client-ca\") pod \"route-controller-manager-7f8dbbc44b-xd2zz\" (UID: \"e5d5f590-67b3-4e37-a9e1-55d509992f17\") " pod="openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.836332 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f6d06924-4a31-421e-8e42-1d3900edd191-audit-dir\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.836535 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.837324 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.837487 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.837617 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5d5f590-67b3-4e37-a9e1-55d509992f17-serving-cert\") pod \"route-controller-manager-7f8dbbc44b-xd2zz\" (UID: \"e5d5f590-67b3-4e37-a9e1-55d509992f17\") " pod="openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.837758 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-user-template-error\") pod 
\"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.837911 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d5f590-67b3-4e37-a9e1-55d509992f17-config\") pod \"route-controller-manager-7f8dbbc44b-xd2zz\" (UID: \"e5d5f590-67b3-4e37-a9e1-55d509992f17\") " pod="openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.837298 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5d5f590-67b3-4e37-a9e1-55d509992f17-client-ca\") pod \"route-controller-manager-7f8dbbc44b-xd2zz\" (UID: \"e5d5f590-67b3-4e37-a9e1-55d509992f17\") " pod="openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.838099 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18027a76-8991-403e-8dec-d0115c4cb164-secret-volume\") pod \"collect-profiles-29494995-ncxzx\" (UID: \"18027a76-8991-403e-8dec-d0115c4cb164\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-ncxzx" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.838221 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18027a76-8991-403e-8dec-d0115c4cb164-config-volume\") pod \"collect-profiles-29494995-ncxzx\" (UID: \"18027a76-8991-403e-8dec-d0115c4cb164\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-ncxzx" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.838352 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.838456 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f6d06924-4a31-421e-8e42-1d3900edd191-audit-policies\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.838597 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.838717 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " 
pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.838866 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.838975 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7kcn\" (UniqueName: \"kubernetes.io/projected/18027a76-8991-403e-8dec-d0115c4cb164-kube-api-access-p7kcn\") pod \"collect-profiles-29494995-ncxzx\" (UID: \"18027a76-8991-403e-8dec-d0115c4cb164\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-ncxzx" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.839545 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-system-service-ca\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.840026 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f6d06924-4a31-421e-8e42-1d3900edd191-audit-policies\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.842469 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.847294 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-system-session\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.847822 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5d5f590-67b3-4e37-a9e1-55d509992f17-serving-cert\") pod \"route-controller-manager-7f8dbbc44b-xd2zz\" (UID: \"e5d5f590-67b3-4e37-a9e1-55d509992f17\") " pod="openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.849105 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 
15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.849902 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d5f590-67b3-4e37-a9e1-55d509992f17-config\") pod \"route-controller-manager-7f8dbbc44b-xd2zz\" (UID: \"e5d5f590-67b3-4e37-a9e1-55d509992f17\") " pod="openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.851602 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18027a76-8991-403e-8dec-d0115c4cb164-config-volume\") pod \"collect-profiles-29494995-ncxzx\" (UID: \"18027a76-8991-403e-8dec-d0115c4cb164\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-ncxzx" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.851947 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-user-template-login\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.853336 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.855697 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-user-template-error\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.856942 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.857169 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18027a76-8991-403e-8dec-d0115c4cb164-secret-volume\") pod \"collect-profiles-29494995-ncxzx\" (UID: \"18027a76-8991-403e-8dec-d0115c4cb164\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-ncxzx" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.857440 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-system-router-certs\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.859748 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-h7qwz\" (UniqueName: \"kubernetes.io/projected/e5d5f590-67b3-4e37-a9e1-55d509992f17-kube-api-access-h7qwz\") pod \"route-controller-manager-7f8dbbc44b-xd2zz\" (UID: \"e5d5f590-67b3-4e37-a9e1-55d509992f17\") " pod="openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.864502 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7kcn\" (UniqueName: \"kubernetes.io/projected/18027a76-8991-403e-8dec-d0115c4cb164-kube-api-access-p7kcn\") pod \"collect-profiles-29494995-ncxzx\" (UID: \"18027a76-8991-403e-8dec-d0115c4cb164\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-ncxzx" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.865689 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f6d06924-4a31-421e-8e42-1d3900edd191-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.868573 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7njkt\" (UniqueName: \"kubernetes.io/projected/f6d06924-4a31-421e-8e42-1d3900edd191-kube-api-access-7njkt\") pod \"oauth-openshift-6fd99bdd67-cctrw\" (UID: \"f6d06924-4a31-421e-8e42-1d3900edd191\") " pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.912994 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.929243 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.935552 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-ncxzx" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.945986 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 15:15:43 crc kubenswrapper[4757]: I0129 15:15:43.976252 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 15:15:44 crc kubenswrapper[4757]: I0129 15:15:44.035127 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 15:15:44 crc kubenswrapper[4757]: I0129 15:15:44.089740 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 15:15:44 crc kubenswrapper[4757]: I0129 15:15:44.092029 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 29 15:15:44 crc kubenswrapper[4757]: I0129 15:15:44.370887 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 15:15:44 crc kubenswrapper[4757]: I0129 15:15:44.510400 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 15:15:44 crc kubenswrapper[4757]: I0129 15:15:44.543858 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 15:15:44 crc kubenswrapper[4757]: I0129 15:15:44.560091 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 15:15:44 crc kubenswrapper[4757]: I0129 15:15:44.678410 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 29 15:15:44 crc kubenswrapper[4757]: I0129 15:15:44.698172 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 15:15:44 crc kubenswrapper[4757]: I0129 15:15:44.825447 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 15:15:44 crc kubenswrapper[4757]: I0129 15:15:44.895992 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 29 15:15:44 crc kubenswrapper[4757]: I0129 15:15:44.998337 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 29 15:15:45 crc kubenswrapper[4757]: I0129 15:15:45.123144 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=26.123126455 podStartE2EDuration="26.123126455s" podCreationTimestamp="2026-01-29 15:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:15:43.690355934 +0000 UTC m=+306.979606181" watchObservedRunningTime="2026-01-29 15:15:45.123126455 +0000 UTC m=+308.412376692" Jan 29 15:15:45 crc kubenswrapper[4757]: I0129 15:15:45.126196 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6fd99bdd67-cctrw"] Jan 29 15:15:45 crc kubenswrapper[4757]: I0129 15:15:45.141981 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29494995-ncxzx"] Jan 29 15:15:45 crc kubenswrapper[4757]: I0129 15:15:45.160363 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz"] Jan 29 15:15:45 crc kubenswrapper[4757]: I0129 15:15:45.194454 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 15:15:45 crc kubenswrapper[4757]: I0129 15:15:45.241867 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 29 15:15:45 crc kubenswrapper[4757]: I0129 15:15:45.408792 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9d54611-82e4-4698-b654-62a1d7144225" path="/var/lib/kubelet/pods/e9d54611-82e4-4698-b654-62a1d7144225/volumes" Jan 29 15:15:45 crc kubenswrapper[4757]: I0129 15:15:45.420599 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 15:15:45 crc kubenswrapper[4757]: I0129 15:15:45.501390 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 15:15:45 crc kubenswrapper[4757]: I0129 15:15:45.584539 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6fd99bdd67-cctrw"] Jan 29 15:15:45 crc kubenswrapper[4757]: W0129 15:15:45.588971 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6d06924_4a31_421e_8e42_1d3900edd191.slice/crio-3058c68e424d9a8d8cd6006fcc9fdb00e74cd51d16660ae2ee389f97a9df7e0e WatchSource:0}: Error finding container 3058c68e424d9a8d8cd6006fcc9fdb00e74cd51d16660ae2ee389f97a9df7e0e: Status 404 returned error can't find the container with id 3058c68e424d9a8d8cd6006fcc9fdb00e74cd51d16660ae2ee389f97a9df7e0e Jan 29 15:15:45 crc kubenswrapper[4757]: I0129 15:15:45.598210 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz"] Jan 29 15:15:45 crc kubenswrapper[4757]: I0129 15:15:45.666904 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494995-ncxzx"] Jan 29 15:15:45 crc kubenswrapper[4757]: I0129 15:15:45.671728 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" event={"ID":"f6d06924-4a31-421e-8e42-1d3900edd191","Type":"ContainerStarted","Data":"3058c68e424d9a8d8cd6006fcc9fdb00e74cd51d16660ae2ee389f97a9df7e0e"} Jan 29 15:15:45 crc kubenswrapper[4757]: I0129 15:15:45.673104 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz" event={"ID":"e5d5f590-67b3-4e37-a9e1-55d509992f17","Type":"ContainerStarted","Data":"b36eee65ea3ddd12eb290df0bb195db0b49476681ea5856640a97dbb1dc10446"} Jan 29 15:15:45 crc kubenswrapper[4757]: W0129 15:15:45.682459 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18027a76_8991_403e_8dec_d0115c4cb164.slice/crio-7be595b9dbacd4c3d22917535e23cfdbc8ed26152ce48821ab8b0bc1c8e8f480 WatchSource:0}: Error finding container 7be595b9dbacd4c3d22917535e23cfdbc8ed26152ce48821ab8b0bc1c8e8f480: Status 404 returned error can't find the 
container with id 7be595b9dbacd4c3d22917535e23cfdbc8ed26152ce48821ab8b0bc1c8e8f480 Jan 29 15:15:45 crc kubenswrapper[4757]: I0129 15:15:45.757688 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 15:15:46 crc kubenswrapper[4757]: I0129 15:15:46.475541 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 29 15:15:46 crc kubenswrapper[4757]: I0129 15:15:46.680872 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" event={"ID":"f6d06924-4a31-421e-8e42-1d3900edd191","Type":"ContainerStarted","Data":"d4629a58ae6622b9bc52b7437a3bd0539a8713458de0410a62d611bd46d2ba3f"} Jan 29 15:15:46 crc kubenswrapper[4757]: I0129 15:15:46.681210 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:46 crc kubenswrapper[4757]: I0129 15:15:46.682445 4757 generic.go:334] "Generic (PLEG): container finished" podID="18027a76-8991-403e-8dec-d0115c4cb164" containerID="aa2d045fd021df6521000bca3bf6784d55ac4d235404f9bd5f47a93e9ec0b0f4" exitCode=0 Jan 29 15:15:46 crc kubenswrapper[4757]: I0129 15:15:46.682582 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-ncxzx" event={"ID":"18027a76-8991-403e-8dec-d0115c4cb164","Type":"ContainerDied","Data":"aa2d045fd021df6521000bca3bf6784d55ac4d235404f9bd5f47a93e9ec0b0f4"} Jan 29 15:15:46 crc kubenswrapper[4757]: I0129 15:15:46.682601 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-ncxzx" event={"ID":"18027a76-8991-403e-8dec-d0115c4cb164","Type":"ContainerStarted","Data":"7be595b9dbacd4c3d22917535e23cfdbc8ed26152ce48821ab8b0bc1c8e8f480"} Jan 29 15:15:46 crc kubenswrapper[4757]: I0129 15:15:46.684225 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz" event={"ID":"e5d5f590-67b3-4e37-a9e1-55d509992f17","Type":"ContainerStarted","Data":"412210532c444f47f8c06c84f5caf48ca48302b78cd3249b5e2656d1ef2329d1"} Jan 29 15:15:46 crc kubenswrapper[4757]: I0129 15:15:46.684798 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz" Jan 29 15:15:46 crc kubenswrapper[4757]: I0129 15:15:46.687567 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" Jan 29 15:15:46 crc kubenswrapper[4757]: I0129 15:15:46.689134 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz" Jan 29 15:15:46 crc kubenswrapper[4757]: I0129 15:15:46.702860 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6fd99bdd67-cctrw" podStartSLOduration=58.702844707 podStartE2EDuration="58.702844707s" podCreationTimestamp="2026-01-29 15:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:15:46.700340398 +0000 UTC m=+309.989590645" watchObservedRunningTime="2026-01-29 15:15:46.702844707 +0000 UTC m=+309.992094954" Jan 29 15:15:46 crc 
kubenswrapper[4757]: I0129 15:15:46.750416 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz" podStartSLOduration=49.750398 podStartE2EDuration="49.750398s" podCreationTimestamp="2026-01-29 15:14:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:15:46.749831175 +0000 UTC m=+310.039081412" watchObservedRunningTime="2026-01-29 15:15:46.750398 +0000 UTC m=+310.039648237" Jan 29 15:15:47 crc kubenswrapper[4757]: E0129 15:15:47.401195 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-99p4m" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" Jan 29 15:15:47 crc kubenswrapper[4757]: E0129 15:15:47.525982 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:15:47 crc kubenswrapper[4757]: E0129 15:15:47.526366 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bg5b9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-57qth_openshift-marketplace(d4596539-1be7-44ac-8e25-3fd37c823166): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:15:47 crc kubenswrapper[4757]: E0129 15:15:47.527640 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source 
docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-57qth" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" Jan 29 15:15:47 crc kubenswrapper[4757]: I0129 15:15:47.920221 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-ncxzx" Jan 29 15:15:47 crc kubenswrapper[4757]: I0129 15:15:47.998133 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7kcn\" (UniqueName: \"kubernetes.io/projected/18027a76-8991-403e-8dec-d0115c4cb164-kube-api-access-p7kcn\") pod \"18027a76-8991-403e-8dec-d0115c4cb164\" (UID: \"18027a76-8991-403e-8dec-d0115c4cb164\") " Jan 29 15:15:47 crc kubenswrapper[4757]: I0129 15:15:47.998330 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18027a76-8991-403e-8dec-d0115c4cb164-config-volume\") pod \"18027a76-8991-403e-8dec-d0115c4cb164\" (UID: \"18027a76-8991-403e-8dec-d0115c4cb164\") " Jan 29 15:15:47 crc kubenswrapper[4757]: I0129 15:15:47.998394 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18027a76-8991-403e-8dec-d0115c4cb164-secret-volume\") pod \"18027a76-8991-403e-8dec-d0115c4cb164\" (UID: \"18027a76-8991-403e-8dec-d0115c4cb164\") " Jan 29 15:15:47 crc kubenswrapper[4757]: I0129 15:15:47.998798 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18027a76-8991-403e-8dec-d0115c4cb164-config-volume" (OuterVolumeSpecName: "config-volume") pod "18027a76-8991-403e-8dec-d0115c4cb164" (UID: "18027a76-8991-403e-8dec-d0115c4cb164"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:15:48 crc kubenswrapper[4757]: I0129 15:15:48.003395 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18027a76-8991-403e-8dec-d0115c4cb164-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "18027a76-8991-403e-8dec-d0115c4cb164" (UID: "18027a76-8991-403e-8dec-d0115c4cb164"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:15:48 crc kubenswrapper[4757]: I0129 15:15:48.003415 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18027a76-8991-403e-8dec-d0115c4cb164-kube-api-access-p7kcn" (OuterVolumeSpecName: "kube-api-access-p7kcn") pod "18027a76-8991-403e-8dec-d0115c4cb164" (UID: "18027a76-8991-403e-8dec-d0115c4cb164"). InnerVolumeSpecName "kube-api-access-p7kcn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:15:48 crc kubenswrapper[4757]: I0129 15:15:48.100009 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7kcn\" (UniqueName: \"kubernetes.io/projected/18027a76-8991-403e-8dec-d0115c4cb164-kube-api-access-p7kcn\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:48 crc kubenswrapper[4757]: I0129 15:15:48.100060 4757 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18027a76-8991-403e-8dec-d0115c4cb164-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:48 crc kubenswrapper[4757]: I0129 15:15:48.100076 4757 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/18027a76-8991-403e-8dec-d0115c4cb164-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:48 crc kubenswrapper[4757]: E0129 15:15:48.562843 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:15:48 crc kubenswrapper[4757]: E0129 15:15:48.563026 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-swndd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-v8v75_openshift-marketplace(bce413ab-1d96-4e66-b700-db27f6b52966): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:15:48 crc kubenswrapper[4757]: E0129 15:15:48.564215 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 
403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-v8v75" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" Jan 29 15:15:48 crc kubenswrapper[4757]: I0129 15:15:48.699304 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-ncxzx" event={"ID":"18027a76-8991-403e-8dec-d0115c4cb164","Type":"ContainerDied","Data":"7be595b9dbacd4c3d22917535e23cfdbc8ed26152ce48821ab8b0bc1c8e8f480"} Jan 29 15:15:48 crc kubenswrapper[4757]: I0129 15:15:48.699410 4757 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7be595b9dbacd4c3d22917535e23cfdbc8ed26152ce48821ab8b0bc1c8e8f480" Jan 29 15:15:48 crc kubenswrapper[4757]: I0129 15:15:48.699437 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494995-ncxzx" Jan 29 15:15:49 crc kubenswrapper[4757]: E0129 15:15:49.520513 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:15:49 crc kubenswrapper[4757]: E0129 15:15:49.520677 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgwj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-jhlrf_openshift-marketplace(92724a14-21db-441f-b509-142dc0a8dc15): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:15:49 crc kubenswrapper[4757]: E0129 15:15:49.521840 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting 
bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-jhlrf" podUID="92724a14-21db-441f-b509-142dc0a8dc15" Jan 29 15:15:50 crc kubenswrapper[4757]: E0129 15:15:50.530164 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:15:50 crc kubenswrapper[4757]: E0129 15:15:50.530358 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tvvcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-btp4k_openshift-marketplace(f2342b27-9060-4697-a957-65d07f099e82): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:15:50 crc kubenswrapper[4757]: E0129 15:15:50.531594 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-btp4k" podUID="f2342b27-9060-4697-a957-65d07f099e82" Jan 29 15:15:52 crc kubenswrapper[4757]: E0129 15:15:52.554181 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:15:52 crc kubenswrapper[4757]: E0129 15:15:52.554609 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d8p6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-c5pw7_openshift-marketplace(4e10b6b9-259a-417c-ba5d-311e75543637): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:15:52 crc kubenswrapper[4757]: E0129 15:15:52.556336 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-c5pw7" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" Jan 29 15:15:53 crc kubenswrapper[4757]: I0129 15:15:53.297061 4757 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 15:15:53 crc kubenswrapper[4757]: I0129 15:15:53.297299 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://84cea0adb2352dc8deeaa3d313d1470f144e1db6913b7e8127a63bc54a2ea988" gracePeriod=5 Jan 29 15:15:54 crc kubenswrapper[4757]: E0129 15:15:54.517932 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:15:54 crc kubenswrapper[4757]: E0129 15:15:54.518079 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs 
--catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rm2n7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-2jc8z_openshift-marketplace(43de85f7-11df-4e6f-8d3f-b982b03ce802): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:15:54 crc kubenswrapper[4757]: E0129 15:15:54.519643 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-2jc8z" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" Jan 29 15:15:56 crc kubenswrapper[4757]: E0129 15:15:56.518184 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:15:56 crc kubenswrapper[4757]: E0129 15:15:56.518419 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bjf2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-pxw6w_openshift-marketplace(fd7070d7-3870-49f1-8976-094ad97b6efc): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:15:56 crc kubenswrapper[4757]: E0129 15:15:56.520014 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-pxw6w" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.175242 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5"] Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.175731 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" podUID="7fea9b3d-4277-4a7f-92e6-23c5431051e4" containerName="controller-manager" containerID="cri-o://9fe6fd9260cc4c532c528ffd70cf74beff48b61dbd2f1a53ef74ca7d0ac89e1d" gracePeriod=30 Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.270951 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz"] Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.271185 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz" podUID="e5d5f590-67b3-4e37-a9e1-55d509992f17" containerName="route-controller-manager" containerID="cri-o://412210532c444f47f8c06c84f5caf48ca48302b78cd3249b5e2656d1ef2329d1" gracePeriod=30 Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.757478 4757 generic.go:334] "Generic (PLEG): container finished" podID="7fea9b3d-4277-4a7f-92e6-23c5431051e4" containerID="9fe6fd9260cc4c532c528ffd70cf74beff48b61dbd2f1a53ef74ca7d0ac89e1d" exitCode=0 Jan 29 15:15:57 crc 
kubenswrapper[4757]: I0129 15:15:57.757548 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" event={"ID":"7fea9b3d-4277-4a7f-92e6-23c5431051e4","Type":"ContainerDied","Data":"9fe6fd9260cc4c532c528ffd70cf74beff48b61dbd2f1a53ef74ca7d0ac89e1d"} Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.757576 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" event={"ID":"7fea9b3d-4277-4a7f-92e6-23c5431051e4","Type":"ContainerDied","Data":"fb2fb4acd8015d5f2bdaff348ce61c9fb2426988d437590c1a0560190d5e0986"} Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.757589 4757 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb2fb4acd8015d5f2bdaff348ce61c9fb2426988d437590c1a0560190d5e0986" Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.760415 4757 generic.go:334] "Generic (PLEG): container finished" podID="e5d5f590-67b3-4e37-a9e1-55d509992f17" containerID="412210532c444f47f8c06c84f5caf48ca48302b78cd3249b5e2656d1ef2329d1" exitCode=0 Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.760453 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz" event={"ID":"e5d5f590-67b3-4e37-a9e1-55d509992f17","Type":"ContainerDied","Data":"412210532c444f47f8c06c84f5caf48ca48302b78cd3249b5e2656d1ef2329d1"} Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.760475 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz" event={"ID":"e5d5f590-67b3-4e37-a9e1-55d509992f17","Type":"ContainerDied","Data":"b36eee65ea3ddd12eb290df0bb195db0b49476681ea5856640a97dbb1dc10446"} Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.760490 4757 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b36eee65ea3ddd12eb290df0bb195db0b49476681ea5856640a97dbb1dc10446" Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.763313 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz" Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.768149 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.932823 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fea9b3d-4277-4a7f-92e6-23c5431051e4-client-ca\") pod \"7fea9b3d-4277-4a7f-92e6-23c5431051e4\" (UID: \"7fea9b3d-4277-4a7f-92e6-23c5431051e4\") " Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.933126 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxjs6\" (UniqueName: \"kubernetes.io/projected/7fea9b3d-4277-4a7f-92e6-23c5431051e4-kube-api-access-wxjs6\") pod \"7fea9b3d-4277-4a7f-92e6-23c5431051e4\" (UID: \"7fea9b3d-4277-4a7f-92e6-23c5431051e4\") " Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.933183 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5d5f590-67b3-4e37-a9e1-55d509992f17-serving-cert\") pod \"e5d5f590-67b3-4e37-a9e1-55d509992f17\" (UID: \"e5d5f590-67b3-4e37-a9e1-55d509992f17\") " Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.933207 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fea9b3d-4277-4a7f-92e6-23c5431051e4-config\") pod \"7fea9b3d-4277-4a7f-92e6-23c5431051e4\" (UID: \"7fea9b3d-4277-4a7f-92e6-23c5431051e4\") " Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.933222 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5d5f590-67b3-4e37-a9e1-55d509992f17-client-ca\") pod \"e5d5f590-67b3-4e37-a9e1-55d509992f17\" (UID: \"e5d5f590-67b3-4e37-a9e1-55d509992f17\") " Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.933246 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fea9b3d-4277-4a7f-92e6-23c5431051e4-proxy-ca-bundles\") pod \"7fea9b3d-4277-4a7f-92e6-23c5431051e4\" (UID: \"7fea9b3d-4277-4a7f-92e6-23c5431051e4\") " Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.933292 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d5f590-67b3-4e37-a9e1-55d509992f17-config\") pod \"e5d5f590-67b3-4e37-a9e1-55d509992f17\" (UID: \"e5d5f590-67b3-4e37-a9e1-55d509992f17\") " Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.933329 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7qwz\" (UniqueName: \"kubernetes.io/projected/e5d5f590-67b3-4e37-a9e1-55d509992f17-kube-api-access-h7qwz\") pod \"e5d5f590-67b3-4e37-a9e1-55d509992f17\" (UID: \"e5d5f590-67b3-4e37-a9e1-55d509992f17\") " Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.933352 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fea9b3d-4277-4a7f-92e6-23c5431051e4-serving-cert\") pod \"7fea9b3d-4277-4a7f-92e6-23c5431051e4\" (UID: \"7fea9b3d-4277-4a7f-92e6-23c5431051e4\") " Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.935172 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fea9b3d-4277-4a7f-92e6-23c5431051e4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7fea9b3d-4277-4a7f-92e6-23c5431051e4" 
(UID: "7fea9b3d-4277-4a7f-92e6-23c5431051e4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.935246 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fea9b3d-4277-4a7f-92e6-23c5431051e4-config" (OuterVolumeSpecName: "config") pod "7fea9b3d-4277-4a7f-92e6-23c5431051e4" (UID: "7fea9b3d-4277-4a7f-92e6-23c5431051e4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.935378 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5d5f590-67b3-4e37-a9e1-55d509992f17-config" (OuterVolumeSpecName: "config") pod "e5d5f590-67b3-4e37-a9e1-55d509992f17" (UID: "e5d5f590-67b3-4e37-a9e1-55d509992f17"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.935534 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fea9b3d-4277-4a7f-92e6-23c5431051e4-client-ca" (OuterVolumeSpecName: "client-ca") pod "7fea9b3d-4277-4a7f-92e6-23c5431051e4" (UID: "7fea9b3d-4277-4a7f-92e6-23c5431051e4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.939615 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fea9b3d-4277-4a7f-92e6-23c5431051e4-kube-api-access-wxjs6" (OuterVolumeSpecName: "kube-api-access-wxjs6") pod "7fea9b3d-4277-4a7f-92e6-23c5431051e4" (UID: "7fea9b3d-4277-4a7f-92e6-23c5431051e4"). InnerVolumeSpecName "kube-api-access-wxjs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.939841 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5d5f590-67b3-4e37-a9e1-55d509992f17-kube-api-access-h7qwz" (OuterVolumeSpecName: "kube-api-access-h7qwz") pod "e5d5f590-67b3-4e37-a9e1-55d509992f17" (UID: "e5d5f590-67b3-4e37-a9e1-55d509992f17"). InnerVolumeSpecName "kube-api-access-h7qwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.940553 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5d5f590-67b3-4e37-a9e1-55d509992f17-client-ca" (OuterVolumeSpecName: "client-ca") pod "e5d5f590-67b3-4e37-a9e1-55d509992f17" (UID: "e5d5f590-67b3-4e37-a9e1-55d509992f17"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.940691 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5d5f590-67b3-4e37-a9e1-55d509992f17-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e5d5f590-67b3-4e37-a9e1-55d509992f17" (UID: "e5d5f590-67b3-4e37-a9e1-55d509992f17"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:15:57 crc kubenswrapper[4757]: I0129 15:15:57.940991 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fea9b3d-4277-4a7f-92e6-23c5431051e4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7fea9b3d-4277-4a7f-92e6-23c5431051e4" (UID: "7fea9b3d-4277-4a7f-92e6-23c5431051e4"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.034945 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7qwz\" (UniqueName: \"kubernetes.io/projected/e5d5f590-67b3-4e37-a9e1-55d509992f17-kube-api-access-h7qwz\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.034993 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fea9b3d-4277-4a7f-92e6-23c5431051e4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.035005 4757 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fea9b3d-4277-4a7f-92e6-23c5431051e4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.035016 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxjs6\" (UniqueName: \"kubernetes.io/projected/7fea9b3d-4277-4a7f-92e6-23c5431051e4-kube-api-access-wxjs6\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.035028 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5d5f590-67b3-4e37-a9e1-55d509992f17-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.035041 4757 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fea9b3d-4277-4a7f-92e6-23c5431051e4-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.035055 4757 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5d5f590-67b3-4e37-a9e1-55d509992f17-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.035066 4757 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fea9b3d-4277-4a7f-92e6-23c5431051e4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.035077 4757 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d5f590-67b3-4e37-a9e1-55d509992f17-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.382193 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-846d6678c7-7s652"] Jan 29 15:15:58 crc kubenswrapper[4757]: E0129 15:15:58.382490 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18027a76-8991-403e-8dec-d0115c4cb164" containerName="collect-profiles" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.382512 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="18027a76-8991-403e-8dec-d0115c4cb164" containerName="collect-profiles" Jan 29 15:15:58 crc kubenswrapper[4757]: E0129 15:15:58.382523 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fea9b3d-4277-4a7f-92e6-23c5431051e4" containerName="controller-manager" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.382531 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fea9b3d-4277-4a7f-92e6-23c5431051e4" containerName="controller-manager" Jan 29 15:15:58 crc kubenswrapper[4757]: E0129 15:15:58.382541 4757 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.382548 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 15:15:58 crc kubenswrapper[4757]: E0129 15:15:58.382565 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d5f590-67b3-4e37-a9e1-55d509992f17" containerName="route-controller-manager" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.382573 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d5f590-67b3-4e37-a9e1-55d509992f17" containerName="route-controller-manager" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.382734 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5d5f590-67b3-4e37-a9e1-55d509992f17" containerName="route-controller-manager" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.382746 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fea9b3d-4277-4a7f-92e6-23c5431051e4" containerName="controller-manager" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.382762 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="18027a76-8991-403e-8dec-d0115c4cb164" containerName="collect-profiles" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.382772 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.383196 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-846d6678c7-7s652" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.386951 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4"] Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.387734 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.389853 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4"] Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.398560 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-846d6678c7-7s652"] Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.542536 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlzh4\" (UniqueName: \"kubernetes.io/projected/c1b91348-aa42-47e7-8798-b893952d5e0e-kube-api-access-xlzh4\") pod \"route-controller-manager-6cd7995f46-29mq4\" (UID: \"c1b91348-aa42-47e7-8798-b893952d5e0e\") " pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.542592 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1af9a98-c7e2-4985-aee1-9357fd453ad5-serving-cert\") pod \"controller-manager-846d6678c7-7s652\" (UID: \"a1af9a98-c7e2-4985-aee1-9357fd453ad5\") " pod="openshift-controller-manager/controller-manager-846d6678c7-7s652" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.542633 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1af9a98-c7e2-4985-aee1-9357fd453ad5-client-ca\") pod \"controller-manager-846d6678c7-7s652\" (UID: \"a1af9a98-c7e2-4985-aee1-9357fd453ad5\") " pod="openshift-controller-manager/controller-manager-846d6678c7-7s652" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.542664 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1af9a98-c7e2-4985-aee1-9357fd453ad5-config\") pod \"controller-manager-846d6678c7-7s652\" (UID: \"a1af9a98-c7e2-4985-aee1-9357fd453ad5\") " pod="openshift-controller-manager/controller-manager-846d6678c7-7s652" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.542687 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1b91348-aa42-47e7-8798-b893952d5e0e-client-ca\") pod \"route-controller-manager-6cd7995f46-29mq4\" (UID: \"c1b91348-aa42-47e7-8798-b893952d5e0e\") " pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.542716 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1b91348-aa42-47e7-8798-b893952d5e0e-serving-cert\") pod \"route-controller-manager-6cd7995f46-29mq4\" (UID: \"c1b91348-aa42-47e7-8798-b893952d5e0e\") " pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.542757 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8lqp\" (UniqueName: \"kubernetes.io/projected/a1af9a98-c7e2-4985-aee1-9357fd453ad5-kube-api-access-n8lqp\") pod \"controller-manager-846d6678c7-7s652\" (UID: \"a1af9a98-c7e2-4985-aee1-9357fd453ad5\") " 
pod="openshift-controller-manager/controller-manager-846d6678c7-7s652" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.542888 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a1af9a98-c7e2-4985-aee1-9357fd453ad5-proxy-ca-bundles\") pod \"controller-manager-846d6678c7-7s652\" (UID: \"a1af9a98-c7e2-4985-aee1-9357fd453ad5\") " pod="openshift-controller-manager/controller-manager-846d6678c7-7s652" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.542935 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1b91348-aa42-47e7-8798-b893952d5e0e-config\") pod \"route-controller-manager-6cd7995f46-29mq4\" (UID: \"c1b91348-aa42-47e7-8798-b893952d5e0e\") " pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.644480 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlzh4\" (UniqueName: \"kubernetes.io/projected/c1b91348-aa42-47e7-8798-b893952d5e0e-kube-api-access-xlzh4\") pod \"route-controller-manager-6cd7995f46-29mq4\" (UID: \"c1b91348-aa42-47e7-8798-b893952d5e0e\") " pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.644541 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1af9a98-c7e2-4985-aee1-9357fd453ad5-serving-cert\") pod \"controller-manager-846d6678c7-7s652\" (UID: \"a1af9a98-c7e2-4985-aee1-9357fd453ad5\") " pod="openshift-controller-manager/controller-manager-846d6678c7-7s652" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.644573 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1af9a98-c7e2-4985-aee1-9357fd453ad5-client-ca\") pod \"controller-manager-846d6678c7-7s652\" (UID: \"a1af9a98-c7e2-4985-aee1-9357fd453ad5\") " pod="openshift-controller-manager/controller-manager-846d6678c7-7s652" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.644600 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1af9a98-c7e2-4985-aee1-9357fd453ad5-config\") pod \"controller-manager-846d6678c7-7s652\" (UID: \"a1af9a98-c7e2-4985-aee1-9357fd453ad5\") " pod="openshift-controller-manager/controller-manager-846d6678c7-7s652" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.644621 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1b91348-aa42-47e7-8798-b893952d5e0e-client-ca\") pod \"route-controller-manager-6cd7995f46-29mq4\" (UID: \"c1b91348-aa42-47e7-8798-b893952d5e0e\") " pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.644649 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1b91348-aa42-47e7-8798-b893952d5e0e-serving-cert\") pod \"route-controller-manager-6cd7995f46-29mq4\" (UID: \"c1b91348-aa42-47e7-8798-b893952d5e0e\") " pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4" Jan 29 15:15:58 crc kubenswrapper[4757]: 
I0129 15:15:58.644676 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8lqp\" (UniqueName: \"kubernetes.io/projected/a1af9a98-c7e2-4985-aee1-9357fd453ad5-kube-api-access-n8lqp\") pod \"controller-manager-846d6678c7-7s652\" (UID: \"a1af9a98-c7e2-4985-aee1-9357fd453ad5\") " pod="openshift-controller-manager/controller-manager-846d6678c7-7s652" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.644713 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a1af9a98-c7e2-4985-aee1-9357fd453ad5-proxy-ca-bundles\") pod \"controller-manager-846d6678c7-7s652\" (UID: \"a1af9a98-c7e2-4985-aee1-9357fd453ad5\") " pod="openshift-controller-manager/controller-manager-846d6678c7-7s652" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.644733 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1b91348-aa42-47e7-8798-b893952d5e0e-config\") pod \"route-controller-manager-6cd7995f46-29mq4\" (UID: \"c1b91348-aa42-47e7-8798-b893952d5e0e\") " pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.645980 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1b91348-aa42-47e7-8798-b893952d5e0e-client-ca\") pod \"route-controller-manager-6cd7995f46-29mq4\" (UID: \"c1b91348-aa42-47e7-8798-b893952d5e0e\") " pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.646372 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1af9a98-c7e2-4985-aee1-9357fd453ad5-client-ca\") pod \"controller-manager-846d6678c7-7s652\" (UID: \"a1af9a98-c7e2-4985-aee1-9357fd453ad5\") " pod="openshift-controller-manager/controller-manager-846d6678c7-7s652" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.646788 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a1af9a98-c7e2-4985-aee1-9357fd453ad5-proxy-ca-bundles\") pod \"controller-manager-846d6678c7-7s652\" (UID: \"a1af9a98-c7e2-4985-aee1-9357fd453ad5\") " pod="openshift-controller-manager/controller-manager-846d6678c7-7s652" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.646899 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1b91348-aa42-47e7-8798-b893952d5e0e-config\") pod \"route-controller-manager-6cd7995f46-29mq4\" (UID: \"c1b91348-aa42-47e7-8798-b893952d5e0e\") " pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.647201 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1af9a98-c7e2-4985-aee1-9357fd453ad5-config\") pod \"controller-manager-846d6678c7-7s652\" (UID: \"a1af9a98-c7e2-4985-aee1-9357fd453ad5\") " pod="openshift-controller-manager/controller-manager-846d6678c7-7s652" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.648904 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1b91348-aa42-47e7-8798-b893952d5e0e-serving-cert\") pod 
\"route-controller-manager-6cd7995f46-29mq4\" (UID: \"c1b91348-aa42-47e7-8798-b893952d5e0e\") " pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.650606 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1af9a98-c7e2-4985-aee1-9357fd453ad5-serving-cert\") pod \"controller-manager-846d6678c7-7s652\" (UID: \"a1af9a98-c7e2-4985-aee1-9357fd453ad5\") " pod="openshift-controller-manager/controller-manager-846d6678c7-7s652" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.661478 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlzh4\" (UniqueName: \"kubernetes.io/projected/c1b91348-aa42-47e7-8798-b893952d5e0e-kube-api-access-xlzh4\") pod \"route-controller-manager-6cd7995f46-29mq4\" (UID: \"c1b91348-aa42-47e7-8798-b893952d5e0e\") " pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.662082 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8lqp\" (UniqueName: \"kubernetes.io/projected/a1af9a98-c7e2-4985-aee1-9357fd453ad5-kube-api-access-n8lqp\") pod \"controller-manager-846d6678c7-7s652\" (UID: \"a1af9a98-c7e2-4985-aee1-9357fd453ad5\") " pod="openshift-controller-manager/controller-manager-846d6678c7-7s652" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.721115 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-846d6678c7-7s652" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.733511 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.767998 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.768049 4757 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="84cea0adb2352dc8deeaa3d313d1470f144e1db6913b7e8127a63bc54a2ea988" exitCode=137 Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.768123 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.768165 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.800818 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5"] Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.824712 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-85bc4bdcd-5zkz5"] Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.833627 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz"] Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.842714 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8dbbc44b-xd2zz"] Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.965559 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 15:15:58 crc kubenswrapper[4757]: I0129 15:15:58.965649 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.059019 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.059109 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.059151 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.059201 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.059244 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.059949 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.059984 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.060030 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.060120 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.060318 4757 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.060333 4757 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.060341 4757 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.060350 4757 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.064811 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.155940 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4"] Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.162906 4757 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.210735 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-846d6678c7-7s652"] Jan 29 15:15:59 crc kubenswrapper[4757]: W0129 15:15:59.219463 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1af9a98_c7e2_4985_aee1_9357fd453ad5.slice/crio-b9285529236dfd64a33b59e4eb9f3080bfe43b864859b9fb49858452ddc87c48 WatchSource:0}: Error finding container b9285529236dfd64a33b59e4eb9f3080bfe43b864859b9fb49858452ddc87c48: Status 404 returned error can't find the container with id b9285529236dfd64a33b59e4eb9f3080bfe43b864859b9fb49858452ddc87c48 Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.404035 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fea9b3d-4277-4a7f-92e6-23c5431051e4" path="/var/lib/kubelet/pods/7fea9b3d-4277-4a7f-92e6-23c5431051e4/volumes" Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.405124 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5d5f590-67b3-4e37-a9e1-55d509992f17" path="/var/lib/kubelet/pods/e5d5f590-67b3-4e37-a9e1-55d509992f17/volumes" Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.405688 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.777162 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.777292 4757 scope.go:117] "RemoveContainer" containerID="84cea0adb2352dc8deeaa3d313d1470f144e1db6913b7e8127a63bc54a2ea988" Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.777462 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.780718 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4" event={"ID":"c1b91348-aa42-47e7-8798-b893952d5e0e","Type":"ContainerStarted","Data":"f3d90612aa3c7733952739250f868867dfc69793e9e0a41950470080b3ff6191"} Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.780760 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4" event={"ID":"c1b91348-aa42-47e7-8798-b893952d5e0e","Type":"ContainerStarted","Data":"5c44ae626597e31482a2133d262c4017b7c5f029724d6f6870e6188fb0d21e72"} Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.781743 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4" Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.787807 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-846d6678c7-7s652" event={"ID":"a1af9a98-c7e2-4985-aee1-9357fd453ad5","Type":"ContainerStarted","Data":"d542e04c40e7d0b32e9e711cd380167b06168e58c35f423be4af5c3c62e85e20"} Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.787856 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-846d6678c7-7s652" event={"ID":"a1af9a98-c7e2-4985-aee1-9357fd453ad5","Type":"ContainerStarted","Data":"b9285529236dfd64a33b59e4eb9f3080bfe43b864859b9fb49858452ddc87c48"} Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.788809 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-846d6678c7-7s652" Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.804819 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-846d6678c7-7s652" Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.821623 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4" podStartSLOduration=2.82160478 podStartE2EDuration="2.82160478s" podCreationTimestamp="2026-01-29 15:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:15:59.82087669 +0000 UTC m=+323.110126927" watchObservedRunningTime="2026-01-29 15:15:59.82160478 +0000 UTC m=+323.110855017" Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.839893 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-846d6678c7-7s652" podStartSLOduration=2.839877361 podStartE2EDuration="2.839877361s" podCreationTimestamp="2026-01-29 15:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:15:59.838810321 +0000 UTC m=+323.128060558" watchObservedRunningTime="2026-01-29 15:15:59.839877361 +0000 UTC m=+323.129127598" Jan 29 15:15:59 crc kubenswrapper[4757]: I0129 15:15:59.924302 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4" Jan 29 15:16:00 crc 
Jan 29 15:16:00 crc kubenswrapper[4757]: E0129 15:16:00.398329 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-v8v75" podUID="bce413ab-1d96-4e66-b700-db27f6b52966"
Jan 29 15:16:01 crc kubenswrapper[4757]: E0129 15:16:01.398294 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-57qth" podUID="d4596539-1be7-44ac-8e25-3fd37c823166"
Jan 29 15:16:01 crc kubenswrapper[4757]: E0129 15:16:01.398464 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-99p4m" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09"
Jan 29 15:16:02 crc kubenswrapper[4757]: E0129 15:16:02.397987 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jhlrf" podUID="92724a14-21db-441f-b509-142dc0a8dc15"
Jan 29 15:16:05 crc kubenswrapper[4757]: E0129 15:16:05.398287 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-btp4k" podUID="f2342b27-9060-4697-a957-65d07f099e82"
Jan 29 15:16:07 crc kubenswrapper[4757]: E0129 15:16:07.400975 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2jc8z" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802"
Jan 29 15:16:08 crc kubenswrapper[4757]: E0129 15:16:08.398386 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c5pw7" podUID="4e10b6b9-259a-417c-ba5d-311e75543637"
Jan 29 15:16:08 crc kubenswrapper[4757]: E0129 15:16:08.398384 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pxw6w" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc"
Jan 29 15:16:11 crc kubenswrapper[4757]: E0129 15:16:11.398835 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-v8v75" podUID="bce413ab-1d96-4e66-b700-db27f6b52966"
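The redhat-operators-v8v75 pull is reported in back-off at 15:16:00 and again at 15:16:11, consistent with an exponentially growing delay between image pull attempts. A minimal sketch of such a back-off schedule; the 10s initial delay and 5m cap are assumed kubelet defaults, not values read from this log:

```go
// Sketch of an exponential image-pull back-off (assumed 10s initial, 5m cap;
// illustrative, not the kubelet implementation).
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: retry after %v\n", attempt, delay)
		delay *= 2 // double on each failure
		if delay > maxDelay {
			delay = maxDelay // never wait longer than the cap
		}
	}
}
```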
Jan 29 15:16:12 crc kubenswrapper[4757]: E0129 15:16:12.397675 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-99p4m" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09"
Jan 29 15:16:12 crc kubenswrapper[4757]: E0129 15:16:12.397811 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-57qth" podUID="d4596539-1be7-44ac-8e25-3fd37c823166"
Jan 29 15:16:13 crc kubenswrapper[4757]: E0129 15:16:13.397472 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jhlrf" podUID="92724a14-21db-441f-b509-142dc0a8dc15"
Jan 29 15:16:17 crc kubenswrapper[4757]: I0129 15:16:17.164243 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-846d6678c7-7s652"]
Jan 29 15:16:17 crc kubenswrapper[4757]: I0129 15:16:17.164750 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-846d6678c7-7s652" podUID="a1af9a98-c7e2-4985-aee1-9357fd453ad5" containerName="controller-manager" containerID="cri-o://d542e04c40e7d0b32e9e711cd380167b06168e58c35f423be4af5c3c62e85e20" gracePeriod=30
Jan 29 15:16:17 crc kubenswrapper[4757]: I0129 15:16:17.174613 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4"]
Jan 29 15:16:17 crc kubenswrapper[4757]: I0129 15:16:17.174853 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4" podUID="c1b91348-aa42-47e7-8798-b893952d5e0e" containerName="route-controller-manager" containerID="cri-o://f3d90612aa3c7733952739250f868867dfc69793e9e0a41950470080b3ff6191" gracePeriod=30
Jan 29 15:16:17 crc kubenswrapper[4757]: E0129 15:16:17.409103 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-btp4k" podUID="f2342b27-9060-4697-a957-65d07f099e82"
Jan 29 15:16:17 crc kubenswrapper[4757]: I0129 15:16:17.887513 4757 generic.go:334] "Generic (PLEG): container finished" podID="c1b91348-aa42-47e7-8798-b893952d5e0e" containerID="f3d90612aa3c7733952739250f868867dfc69793e9e0a41950470080b3ff6191" exitCode=0
Jan 29 15:16:17 crc kubenswrapper[4757]: I0129 15:16:17.887635 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4" event={"ID":"c1b91348-aa42-47e7-8798-b893952d5e0e","Type":"ContainerDied","Data":"f3d90612aa3c7733952739250f868867dfc69793e9e0a41950470080b3ff6191"}
Jan 29 15:16:17 crc kubenswrapper[4757]: I0129 15:16:17.889504 4757 generic.go:334] "Generic (PLEG): container finished" podID="a1af9a98-c7e2-4985-aee1-9357fd453ad5" containerID="d542e04c40e7d0b32e9e711cd380167b06168e58c35f423be4af5c3c62e85e20" exitCode=0
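gracePeriod=30 in the "Killing container with a grace period" entries above means the container gets 30 seconds between the polite stop signal and a hard kill; both containers here exit within it (exitCode=0). A toy process-level sketch of that contract, as an assumption about the general SIGTERM-then-SIGKILL pattern (the kubelet itself delegates the stop to the CRI runtime, cri-o here):

```go
// Toy graceful-kill: SIGTERM first, SIGKILL only if the process outlives
// the grace period (illustrative; not kubelet/cri-o code).
package main

import (
	"os/exec"
	"syscall"
	"time"
)

func killWithGrace(cmd *exec.Cmd, grace time.Duration) {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	cmd.Process.Signal(syscall.SIGTERM) // polite request to exit
	select {
	case <-done: // exited within the grace period
	case <-time.After(grace):
		cmd.Process.Kill() // SIGKILL once the grace period elapses
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	cmd.Start()
	killWithGrace(cmd, 30*time.Second) // mirrors gracePeriod=30 above
}
```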
containerID="d542e04c40e7d0b32e9e711cd380167b06168e58c35f423be4af5c3c62e85e20" exitCode=0 Jan 29 15:16:17 crc kubenswrapper[4757]: I0129 15:16:17.889565 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-846d6678c7-7s652" event={"ID":"a1af9a98-c7e2-4985-aee1-9357fd453ad5","Type":"ContainerDied","Data":"d542e04c40e7d0b32e9e711cd380167b06168e58c35f423be4af5c3c62e85e20"} Jan 29 15:16:17 crc kubenswrapper[4757]: I0129 15:16:17.889599 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-846d6678c7-7s652" event={"ID":"a1af9a98-c7e2-4985-aee1-9357fd453ad5","Type":"ContainerDied","Data":"b9285529236dfd64a33b59e4eb9f3080bfe43b864859b9fb49858452ddc87c48"} Jan 29 15:16:17 crc kubenswrapper[4757]: I0129 15:16:17.889631 4757 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9285529236dfd64a33b59e4eb9f3080bfe43b864859b9fb49858452ddc87c48" Jan 29 15:16:17 crc kubenswrapper[4757]: I0129 15:16:17.911610 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-846d6678c7-7s652" Jan 29 15:16:17 crc kubenswrapper[4757]: I0129 15:16:17.942964 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1af9a98-c7e2-4985-aee1-9357fd453ad5-client-ca\") pod \"a1af9a98-c7e2-4985-aee1-9357fd453ad5\" (UID: \"a1af9a98-c7e2-4985-aee1-9357fd453ad5\") " Jan 29 15:16:17 crc kubenswrapper[4757]: I0129 15:16:17.943029 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8lqp\" (UniqueName: \"kubernetes.io/projected/a1af9a98-c7e2-4985-aee1-9357fd453ad5-kube-api-access-n8lqp\") pod \"a1af9a98-c7e2-4985-aee1-9357fd453ad5\" (UID: \"a1af9a98-c7e2-4985-aee1-9357fd453ad5\") " Jan 29 15:16:17 crc kubenswrapper[4757]: I0129 15:16:17.943056 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a1af9a98-c7e2-4985-aee1-9357fd453ad5-proxy-ca-bundles\") pod \"a1af9a98-c7e2-4985-aee1-9357fd453ad5\" (UID: \"a1af9a98-c7e2-4985-aee1-9357fd453ad5\") " Jan 29 15:16:17 crc kubenswrapper[4757]: I0129 15:16:17.943127 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1af9a98-c7e2-4985-aee1-9357fd453ad5-config\") pod \"a1af9a98-c7e2-4985-aee1-9357fd453ad5\" (UID: \"a1af9a98-c7e2-4985-aee1-9357fd453ad5\") " Jan 29 15:16:17 crc kubenswrapper[4757]: I0129 15:16:17.943875 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1af9a98-c7e2-4985-aee1-9357fd453ad5-client-ca" (OuterVolumeSpecName: "client-ca") pod "a1af9a98-c7e2-4985-aee1-9357fd453ad5" (UID: "a1af9a98-c7e2-4985-aee1-9357fd453ad5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:16:17 crc kubenswrapper[4757]: I0129 15:16:17.943890 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1af9a98-c7e2-4985-aee1-9357fd453ad5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a1af9a98-c7e2-4985-aee1-9357fd453ad5" (UID: "a1af9a98-c7e2-4985-aee1-9357fd453ad5"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:16:17 crc kubenswrapper[4757]: I0129 15:16:17.944006 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1af9a98-c7e2-4985-aee1-9357fd453ad5-serving-cert\") pod \"a1af9a98-c7e2-4985-aee1-9357fd453ad5\" (UID: \"a1af9a98-c7e2-4985-aee1-9357fd453ad5\") " Jan 29 15:16:17 crc kubenswrapper[4757]: I0129 15:16:17.944362 4757 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1af9a98-c7e2-4985-aee1-9357fd453ad5-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:16:17 crc kubenswrapper[4757]: I0129 15:16:17.944373 4757 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a1af9a98-c7e2-4985-aee1-9357fd453ad5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:16:17 crc kubenswrapper[4757]: I0129 15:16:17.944467 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1af9a98-c7e2-4985-aee1-9357fd453ad5-config" (OuterVolumeSpecName: "config") pod "a1af9a98-c7e2-4985-aee1-9357fd453ad5" (UID: "a1af9a98-c7e2-4985-aee1-9357fd453ad5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:16:17 crc kubenswrapper[4757]: I0129 15:16:17.953639 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1af9a98-c7e2-4985-aee1-9357fd453ad5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a1af9a98-c7e2-4985-aee1-9357fd453ad5" (UID: "a1af9a98-c7e2-4985-aee1-9357fd453ad5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:16:17 crc kubenswrapper[4757]: I0129 15:16:17.957201 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1af9a98-c7e2-4985-aee1-9357fd453ad5-kube-api-access-n8lqp" (OuterVolumeSpecName: "kube-api-access-n8lqp") pod "a1af9a98-c7e2-4985-aee1-9357fd453ad5" (UID: "a1af9a98-c7e2-4985-aee1-9357fd453ad5"). InnerVolumeSpecName "kube-api-access-n8lqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.044972 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1af9a98-c7e2-4985-aee1-9357fd453ad5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.045002 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8lqp\" (UniqueName: \"kubernetes.io/projected/a1af9a98-c7e2-4985-aee1-9357fd453ad5-kube-api-access-n8lqp\") on node \"crc\" DevicePath \"\"" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.045012 4757 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1af9a98-c7e2-4985-aee1-9357fd453ad5-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.154635 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.246606 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1b91348-aa42-47e7-8798-b893952d5e0e-client-ca\") pod \"c1b91348-aa42-47e7-8798-b893952d5e0e\" (UID: \"c1b91348-aa42-47e7-8798-b893952d5e0e\") " Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.246642 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1b91348-aa42-47e7-8798-b893952d5e0e-config\") pod \"c1b91348-aa42-47e7-8798-b893952d5e0e\" (UID: \"c1b91348-aa42-47e7-8798-b893952d5e0e\") " Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.246667 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1b91348-aa42-47e7-8798-b893952d5e0e-serving-cert\") pod \"c1b91348-aa42-47e7-8798-b893952d5e0e\" (UID: \"c1b91348-aa42-47e7-8798-b893952d5e0e\") " Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.246707 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlzh4\" (UniqueName: \"kubernetes.io/projected/c1b91348-aa42-47e7-8798-b893952d5e0e-kube-api-access-xlzh4\") pod \"c1b91348-aa42-47e7-8798-b893952d5e0e\" (UID: \"c1b91348-aa42-47e7-8798-b893952d5e0e\") " Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.247356 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1b91348-aa42-47e7-8798-b893952d5e0e-client-ca" (OuterVolumeSpecName: "client-ca") pod "c1b91348-aa42-47e7-8798-b893952d5e0e" (UID: "c1b91348-aa42-47e7-8798-b893952d5e0e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.247699 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1b91348-aa42-47e7-8798-b893952d5e0e-config" (OuterVolumeSpecName: "config") pod "c1b91348-aa42-47e7-8798-b893952d5e0e" (UID: "c1b91348-aa42-47e7-8798-b893952d5e0e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.249535 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1b91348-aa42-47e7-8798-b893952d5e0e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c1b91348-aa42-47e7-8798-b893952d5e0e" (UID: "c1b91348-aa42-47e7-8798-b893952d5e0e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.249597 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1b91348-aa42-47e7-8798-b893952d5e0e-kube-api-access-xlzh4" (OuterVolumeSpecName: "kube-api-access-xlzh4") pod "c1b91348-aa42-47e7-8798-b893952d5e0e" (UID: "c1b91348-aa42-47e7-8798-b893952d5e0e"). InnerVolumeSpecName "kube-api-access-xlzh4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.348171 4757 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1b91348-aa42-47e7-8798-b893952d5e0e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.348211 4757 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1b91348-aa42-47e7-8798-b893952d5e0e-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.348222 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1b91348-aa42-47e7-8798-b893952d5e0e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.348236 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlzh4\" (UniqueName: \"kubernetes.io/projected/c1b91348-aa42-47e7-8798-b893952d5e0e-kube-api-access-xlzh4\") on node \"crc\" DevicePath \"\"" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.386795 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv"] Jan 29 15:16:18 crc kubenswrapper[4757]: E0129 15:16:18.389180 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1af9a98-c7e2-4985-aee1-9357fd453ad5" containerName="controller-manager" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.389205 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1af9a98-c7e2-4985-aee1-9357fd453ad5" containerName="controller-manager" Jan 29 15:16:18 crc kubenswrapper[4757]: E0129 15:16:18.389231 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1b91348-aa42-47e7-8798-b893952d5e0e" containerName="route-controller-manager" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.389239 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1b91348-aa42-47e7-8798-b893952d5e0e" containerName="route-controller-manager" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.389391 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1b91348-aa42-47e7-8798-b893952d5e0e" containerName="route-controller-manager" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.389406 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1af9a98-c7e2-4985-aee1-9357fd453ad5" containerName="controller-manager" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.389821 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.390003 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9"] Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.390419 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9" Jan 29 15:16:18 crc kubenswrapper[4757]: E0129 15:16:18.398532 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2jc8z" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.399762 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9"] Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.407474 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv"] Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.448773 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a7bc6af-ed42-49e9-809a-1716dc426216-proxy-ca-bundles\") pod \"controller-manager-6cb96b48f7-vfkvv\" (UID: \"6a7bc6af-ed42-49e9-809a-1716dc426216\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.448849 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4f9e864-01cf-4960-8faf-a06fb3934a5a-serving-cert\") pod \"route-controller-manager-5dcb9544cc-9n8d9\" (UID: \"f4f9e864-01cf-4960-8faf-a06fb3934a5a\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.448882 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a7bc6af-ed42-49e9-809a-1716dc426216-config\") pod \"controller-manager-6cb96b48f7-vfkvv\" (UID: \"6a7bc6af-ed42-49e9-809a-1716dc426216\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.448910 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmvmj\" (UniqueName: \"kubernetes.io/projected/f4f9e864-01cf-4960-8faf-a06fb3934a5a-kube-api-access-nmvmj\") pod \"route-controller-manager-5dcb9544cc-9n8d9\" (UID: \"f4f9e864-01cf-4960-8faf-a06fb3934a5a\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.448985 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp22j\" (UniqueName: \"kubernetes.io/projected/6a7bc6af-ed42-49e9-809a-1716dc426216-kube-api-access-fp22j\") pod \"controller-manager-6cb96b48f7-vfkvv\" (UID: \"6a7bc6af-ed42-49e9-809a-1716dc426216\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.449008 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a7bc6af-ed42-49e9-809a-1716dc426216-serving-cert\") pod \"controller-manager-6cb96b48f7-vfkvv\" (UID: \"6a7bc6af-ed42-49e9-809a-1716dc426216\") " 
pod="openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.449031 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a7bc6af-ed42-49e9-809a-1716dc426216-client-ca\") pod \"controller-manager-6cb96b48f7-vfkvv\" (UID: \"6a7bc6af-ed42-49e9-809a-1716dc426216\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.449067 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4f9e864-01cf-4960-8faf-a06fb3934a5a-client-ca\") pod \"route-controller-manager-5dcb9544cc-9n8d9\" (UID: \"f4f9e864-01cf-4960-8faf-a06fb3934a5a\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.449090 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4f9e864-01cf-4960-8faf-a06fb3934a5a-config\") pod \"route-controller-manager-5dcb9544cc-9n8d9\" (UID: \"f4f9e864-01cf-4960-8faf-a06fb3934a5a\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.549730 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp22j\" (UniqueName: \"kubernetes.io/projected/6a7bc6af-ed42-49e9-809a-1716dc426216-kube-api-access-fp22j\") pod \"controller-manager-6cb96b48f7-vfkvv\" (UID: \"6a7bc6af-ed42-49e9-809a-1716dc426216\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.549799 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a7bc6af-ed42-49e9-809a-1716dc426216-serving-cert\") pod \"controller-manager-6cb96b48f7-vfkvv\" (UID: \"6a7bc6af-ed42-49e9-809a-1716dc426216\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.549866 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a7bc6af-ed42-49e9-809a-1716dc426216-client-ca\") pod \"controller-manager-6cb96b48f7-vfkvv\" (UID: \"6a7bc6af-ed42-49e9-809a-1716dc426216\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.549924 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4f9e864-01cf-4960-8faf-a06fb3934a5a-client-ca\") pod \"route-controller-manager-5dcb9544cc-9n8d9\" (UID: \"f4f9e864-01cf-4960-8faf-a06fb3934a5a\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.549953 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4f9e864-01cf-4960-8faf-a06fb3934a5a-config\") pod \"route-controller-manager-5dcb9544cc-9n8d9\" (UID: \"f4f9e864-01cf-4960-8faf-a06fb3934a5a\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9" Jan 29 15:16:18 crc kubenswrapper[4757]: 
I0129 15:16:18.550001 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a7bc6af-ed42-49e9-809a-1716dc426216-proxy-ca-bundles\") pod \"controller-manager-6cb96b48f7-vfkvv\" (UID: \"6a7bc6af-ed42-49e9-809a-1716dc426216\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.550051 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4f9e864-01cf-4960-8faf-a06fb3934a5a-serving-cert\") pod \"route-controller-manager-5dcb9544cc-9n8d9\" (UID: \"f4f9e864-01cf-4960-8faf-a06fb3934a5a\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.550084 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a7bc6af-ed42-49e9-809a-1716dc426216-config\") pod \"controller-manager-6cb96b48f7-vfkvv\" (UID: \"6a7bc6af-ed42-49e9-809a-1716dc426216\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.550122 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmvmj\" (UniqueName: \"kubernetes.io/projected/f4f9e864-01cf-4960-8faf-a06fb3934a5a-kube-api-access-nmvmj\") pod \"route-controller-manager-5dcb9544cc-9n8d9\" (UID: \"f4f9e864-01cf-4960-8faf-a06fb3934a5a\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.551347 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4f9e864-01cf-4960-8faf-a06fb3934a5a-config\") pod \"route-controller-manager-5dcb9544cc-9n8d9\" (UID: \"f4f9e864-01cf-4960-8faf-a06fb3934a5a\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.551515 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4f9e864-01cf-4960-8faf-a06fb3934a5a-client-ca\") pod \"route-controller-manager-5dcb9544cc-9n8d9\" (UID: \"f4f9e864-01cf-4960-8faf-a06fb3934a5a\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.551805 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a7bc6af-ed42-49e9-809a-1716dc426216-client-ca\") pod \"controller-manager-6cb96b48f7-vfkvv\" (UID: \"6a7bc6af-ed42-49e9-809a-1716dc426216\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.552133 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a7bc6af-ed42-49e9-809a-1716dc426216-proxy-ca-bundles\") pod \"controller-manager-6cb96b48f7-vfkvv\" (UID: \"6a7bc6af-ed42-49e9-809a-1716dc426216\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.552347 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6a7bc6af-ed42-49e9-809a-1716dc426216-config\") pod \"controller-manager-6cb96b48f7-vfkvv\" (UID: \"6a7bc6af-ed42-49e9-809a-1716dc426216\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.553464 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4f9e864-01cf-4960-8faf-a06fb3934a5a-serving-cert\") pod \"route-controller-manager-5dcb9544cc-9n8d9\" (UID: \"f4f9e864-01cf-4960-8faf-a06fb3934a5a\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.555998 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a7bc6af-ed42-49e9-809a-1716dc426216-serving-cert\") pod \"controller-manager-6cb96b48f7-vfkvv\" (UID: \"6a7bc6af-ed42-49e9-809a-1716dc426216\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.565687 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmvmj\" (UniqueName: \"kubernetes.io/projected/f4f9e864-01cf-4960-8faf-a06fb3934a5a-kube-api-access-nmvmj\") pod \"route-controller-manager-5dcb9544cc-9n8d9\" (UID: \"f4f9e864-01cf-4960-8faf-a06fb3934a5a\") " pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.566116 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp22j\" (UniqueName: \"kubernetes.io/projected/6a7bc6af-ed42-49e9-809a-1716dc426216-kube-api-access-fp22j\") pod \"controller-manager-6cb96b48f7-vfkvv\" (UID: \"6a7bc6af-ed42-49e9-809a-1716dc426216\") " pod="openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.715836 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.739565 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.900987 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.901009 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-846d6678c7-7s652" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.901009 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4" event={"ID":"c1b91348-aa42-47e7-8798-b893952d5e0e","Type":"ContainerDied","Data":"5c44ae626597e31482a2133d262c4017b7c5f029724d6f6870e6188fb0d21e72"} Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.901081 4757 scope.go:117] "RemoveContainer" containerID="f3d90612aa3c7733952739250f868867dfc69793e9e0a41950470080b3ff6191" Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.947510 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4"] Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.977302 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd7995f46-29mq4"] Jan 29 15:16:18 crc kubenswrapper[4757]: W0129 15:16:18.982620 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a7bc6af_ed42_49e9_809a_1716dc426216.slice/crio-330c674d8d8f0a18e268f0d20a30bbf7e44dfd187a358295a87185557f2b9cdf WatchSource:0}: Error finding container 330c674d8d8f0a18e268f0d20a30bbf7e44dfd187a358295a87185557f2b9cdf: Status 404 returned error can't find the container with id 330c674d8d8f0a18e268f0d20a30bbf7e44dfd187a358295a87185557f2b9cdf Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.985412 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-846d6678c7-7s652"] Jan 29 15:16:18 crc kubenswrapper[4757]: I0129 15:16:18.998933 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv"] Jan 29 15:16:19 crc kubenswrapper[4757]: I0129 15:16:19.002312 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-846d6678c7-7s652"] Jan 29 15:16:19 crc kubenswrapper[4757]: I0129 15:16:19.253573 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9"] Jan 29 15:16:19 crc kubenswrapper[4757]: E0129 15:16:19.400099 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pxw6w" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" Jan 29 15:16:19 crc kubenswrapper[4757]: I0129 15:16:19.402980 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1af9a98-c7e2-4985-aee1-9357fd453ad5" path="/var/lib/kubelet/pods/a1af9a98-c7e2-4985-aee1-9357fd453ad5/volumes" Jan 29 15:16:19 crc kubenswrapper[4757]: I0129 15:16:19.403712 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1b91348-aa42-47e7-8798-b893952d5e0e" path="/var/lib/kubelet/pods/c1b91348-aa42-47e7-8798-b893952d5e0e/volumes" Jan 29 15:16:19 crc kubenswrapper[4757]: I0129 15:16:19.906464 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv" 
event={"ID":"6a7bc6af-ed42-49e9-809a-1716dc426216","Type":"ContainerStarted","Data":"b8ca8ff6e18b3ac996fa70862606c9f8e114388c3af584d7b7360f5f6b7cae89"} Jan 29 15:16:19 crc kubenswrapper[4757]: I0129 15:16:19.906835 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv" event={"ID":"6a7bc6af-ed42-49e9-809a-1716dc426216","Type":"ContainerStarted","Data":"330c674d8d8f0a18e268f0d20a30bbf7e44dfd187a358295a87185557f2b9cdf"} Jan 29 15:16:19 crc kubenswrapper[4757]: I0129 15:16:19.906858 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv" Jan 29 15:16:19 crc kubenswrapper[4757]: I0129 15:16:19.908794 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9" event={"ID":"f4f9e864-01cf-4960-8faf-a06fb3934a5a","Type":"ContainerStarted","Data":"ba9df93c5e1999326415e8478c1cee0f2c798888338db3e5b27c3d17e2ff11c7"} Jan 29 15:16:19 crc kubenswrapper[4757]: I0129 15:16:19.908836 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9" event={"ID":"f4f9e864-01cf-4960-8faf-a06fb3934a5a","Type":"ContainerStarted","Data":"c7da0a3090a496ccb041377c1c6dbdddaded002b1edddd7029c539abe4b1c8ae"} Jan 29 15:16:19 crc kubenswrapper[4757]: I0129 15:16:19.909002 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9" Jan 29 15:16:19 crc kubenswrapper[4757]: I0129 15:16:19.911357 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv" Jan 29 15:16:19 crc kubenswrapper[4757]: I0129 15:16:19.924877 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv" podStartSLOduration=2.924859937 podStartE2EDuration="2.924859937s" podCreationTimestamp="2026-01-29 15:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:16:19.923072248 +0000 UTC m=+343.212322495" watchObservedRunningTime="2026-01-29 15:16:19.924859937 +0000 UTC m=+343.214110174" Jan 29 15:16:20 crc kubenswrapper[4757]: I0129 15:16:20.113620 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9" Jan 29 15:16:20 crc kubenswrapper[4757]: I0129 15:16:20.130833 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9" podStartSLOduration=3.130818974 podStartE2EDuration="3.130818974s" podCreationTimestamp="2026-01-29 15:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:16:19.963495918 +0000 UTC m=+343.252746175" watchObservedRunningTime="2026-01-29 15:16:20.130818974 +0000 UTC m=+343.420069211" Jan 29 15:16:20 crc kubenswrapper[4757]: E0129 15:16:20.398708 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-c5pw7" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" Jan 29 15:16:22 crc kubenswrapper[4757]: E0129 15:16:22.398626 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-v8v75" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" Jan 29 15:16:24 crc kubenswrapper[4757]: E0129 15:16:24.397597 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-99p4m" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" Jan 29 15:16:24 crc kubenswrapper[4757]: E0129 15:16:24.397965 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-57qth" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" Jan 29 15:16:27 crc kubenswrapper[4757]: E0129 15:16:27.400281 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jhlrf" podUID="92724a14-21db-441f-b509-142dc0a8dc15" Jan 29 15:16:30 crc kubenswrapper[4757]: E0129 15:16:30.398051 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2jc8z" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" Jan 29 15:16:32 crc kubenswrapper[4757]: E0129 15:16:32.397425 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c5pw7" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" Jan 29 15:16:32 crc kubenswrapper[4757]: E0129 15:16:32.397497 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-btp4k" podUID="f2342b27-9060-4697-a957-65d07f099e82" Jan 29 15:16:34 crc kubenswrapper[4757]: E0129 15:16:34.398421 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-v8v75" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" Jan 29 15:16:34 crc kubenswrapper[4757]: E0129 15:16:34.398421 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-pxw6w" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" Jan 29 15:16:36 crc kubenswrapper[4757]: E0129 15:16:36.398176 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-99p4m" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" Jan 29 15:16:37 crc kubenswrapper[4757]: I0129 15:16:37.175027 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv"] Jan 29 15:16:37 crc kubenswrapper[4757]: I0129 15:16:37.175303 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv" podUID="6a7bc6af-ed42-49e9-809a-1716dc426216" containerName="controller-manager" containerID="cri-o://b8ca8ff6e18b3ac996fa70862606c9f8e114388c3af584d7b7360f5f6b7cae89" gracePeriod=30 Jan 29 15:16:37 crc kubenswrapper[4757]: E0129 15:16:37.400618 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-57qth" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" Jan 29 15:16:37 crc kubenswrapper[4757]: I0129 15:16:37.723453 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv" Jan 29 15:16:37 crc kubenswrapper[4757]: I0129 15:16:37.826602 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a7bc6af-ed42-49e9-809a-1716dc426216-proxy-ca-bundles\") pod \"6a7bc6af-ed42-49e9-809a-1716dc426216\" (UID: \"6a7bc6af-ed42-49e9-809a-1716dc426216\") " Jan 29 15:16:37 crc kubenswrapper[4757]: I0129 15:16:37.826668 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp22j\" (UniqueName: \"kubernetes.io/projected/6a7bc6af-ed42-49e9-809a-1716dc426216-kube-api-access-fp22j\") pod \"6a7bc6af-ed42-49e9-809a-1716dc426216\" (UID: \"6a7bc6af-ed42-49e9-809a-1716dc426216\") " Jan 29 15:16:37 crc kubenswrapper[4757]: I0129 15:16:37.826736 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a7bc6af-ed42-49e9-809a-1716dc426216-client-ca\") pod \"6a7bc6af-ed42-49e9-809a-1716dc426216\" (UID: \"6a7bc6af-ed42-49e9-809a-1716dc426216\") " Jan 29 15:16:37 crc kubenswrapper[4757]: I0129 15:16:37.826786 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a7bc6af-ed42-49e9-809a-1716dc426216-config\") pod \"6a7bc6af-ed42-49e9-809a-1716dc426216\" (UID: \"6a7bc6af-ed42-49e9-809a-1716dc426216\") " Jan 29 15:16:37 crc kubenswrapper[4757]: I0129 15:16:37.826832 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a7bc6af-ed42-49e9-809a-1716dc426216-serving-cert\") pod \"6a7bc6af-ed42-49e9-809a-1716dc426216\" (UID: \"6a7bc6af-ed42-49e9-809a-1716dc426216\") " Jan 29 15:16:37 crc kubenswrapper[4757]: I0129 15:16:37.827868 4757 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/6a7bc6af-ed42-49e9-809a-1716dc426216-client-ca" (OuterVolumeSpecName: "client-ca") pod "6a7bc6af-ed42-49e9-809a-1716dc426216" (UID: "6a7bc6af-ed42-49e9-809a-1716dc426216"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:16:37 crc kubenswrapper[4757]: I0129 15:16:37.827892 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a7bc6af-ed42-49e9-809a-1716dc426216-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6a7bc6af-ed42-49e9-809a-1716dc426216" (UID: "6a7bc6af-ed42-49e9-809a-1716dc426216"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:16:37 crc kubenswrapper[4757]: I0129 15:16:37.827928 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a7bc6af-ed42-49e9-809a-1716dc426216-config" (OuterVolumeSpecName: "config") pod "6a7bc6af-ed42-49e9-809a-1716dc426216" (UID: "6a7bc6af-ed42-49e9-809a-1716dc426216"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:16:37 crc kubenswrapper[4757]: I0129 15:16:37.834439 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7bc6af-ed42-49e9-809a-1716dc426216-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6a7bc6af-ed42-49e9-809a-1716dc426216" (UID: "6a7bc6af-ed42-49e9-809a-1716dc426216"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:16:37 crc kubenswrapper[4757]: I0129 15:16:37.836433 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a7bc6af-ed42-49e9-809a-1716dc426216-kube-api-access-fp22j" (OuterVolumeSpecName: "kube-api-access-fp22j") pod "6a7bc6af-ed42-49e9-809a-1716dc426216" (UID: "6a7bc6af-ed42-49e9-809a-1716dc426216"). InnerVolumeSpecName "kube-api-access-fp22j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:16:37 crc kubenswrapper[4757]: I0129 15:16:37.928399 4757 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a7bc6af-ed42-49e9-809a-1716dc426216-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:16:37 crc kubenswrapper[4757]: I0129 15:16:37.928429 4757 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a7bc6af-ed42-49e9-809a-1716dc426216-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:16:37 crc kubenswrapper[4757]: I0129 15:16:37.928438 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a7bc6af-ed42-49e9-809a-1716dc426216-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:16:37 crc kubenswrapper[4757]: I0129 15:16:37.928447 4757 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a7bc6af-ed42-49e9-809a-1716dc426216-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:16:37 crc kubenswrapper[4757]: I0129 15:16:37.928457 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp22j\" (UniqueName: \"kubernetes.io/projected/6a7bc6af-ed42-49e9-809a-1716dc426216-kube-api-access-fp22j\") on node \"crc\" DevicePath \"\"" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.002428 4757 generic.go:334] "Generic (PLEG): container finished" podID="6a7bc6af-ed42-49e9-809a-1716dc426216" containerID="b8ca8ff6e18b3ac996fa70862606c9f8e114388c3af584d7b7360f5f6b7cae89" exitCode=0 Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.002487 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv" event={"ID":"6a7bc6af-ed42-49e9-809a-1716dc426216","Type":"ContainerDied","Data":"b8ca8ff6e18b3ac996fa70862606c9f8e114388c3af584d7b7360f5f6b7cae89"} Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.002544 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv" event={"ID":"6a7bc6af-ed42-49e9-809a-1716dc426216","Type":"ContainerDied","Data":"330c674d8d8f0a18e268f0d20a30bbf7e44dfd187a358295a87185557f2b9cdf"} Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.002563 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.002571 4757 scope.go:117] "RemoveContainer" containerID="b8ca8ff6e18b3ac996fa70862606c9f8e114388c3af584d7b7360f5f6b7cae89" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.026621 4757 scope.go:117] "RemoveContainer" containerID="b8ca8ff6e18b3ac996fa70862606c9f8e114388c3af584d7b7360f5f6b7cae89" Jan 29 15:16:38 crc kubenswrapper[4757]: E0129 15:16:38.027427 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8ca8ff6e18b3ac996fa70862606c9f8e114388c3af584d7b7360f5f6b7cae89\": container with ID starting with b8ca8ff6e18b3ac996fa70862606c9f8e114388c3af584d7b7360f5f6b7cae89 not found: ID does not exist" containerID="b8ca8ff6e18b3ac996fa70862606c9f8e114388c3af584d7b7360f5f6b7cae89" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.027453 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8ca8ff6e18b3ac996fa70862606c9f8e114388c3af584d7b7360f5f6b7cae89"} err="failed to get container status \"b8ca8ff6e18b3ac996fa70862606c9f8e114388c3af584d7b7360f5f6b7cae89\": rpc error: code = NotFound desc = could not find container \"b8ca8ff6e18b3ac996fa70862606c9f8e114388c3af584d7b7360f5f6b7cae89\": container with ID starting with b8ca8ff6e18b3ac996fa70862606c9f8e114388c3af584d7b7360f5f6b7cae89 not found: ID does not exist" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.045164 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv"] Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.050806 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6cb96b48f7-vfkvv"] Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.406679 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-846d6678c7-xj4js"] Jan 29 15:16:38 crc kubenswrapper[4757]: E0129 15:16:38.406963 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a7bc6af-ed42-49e9-809a-1716dc426216" containerName="controller-manager" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.406979 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7bc6af-ed42-49e9-809a-1716dc426216" containerName="controller-manager" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.410669 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a7bc6af-ed42-49e9-809a-1716dc426216" containerName="controller-manager" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.411135 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-846d6678c7-xj4js" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.417565 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.417691 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.417798 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.417916 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.418026 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.419576 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-846d6678c7-xj4js"] Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.463283 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.471935 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.533758 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glhtr\" (UniqueName: \"kubernetes.io/projected/6d71bcc8-af0f-4769-927b-915ca3eb7692-kube-api-access-glhtr\") pod \"controller-manager-846d6678c7-xj4js\" (UID: \"6d71bcc8-af0f-4769-927b-915ca3eb7692\") " pod="openshift-controller-manager/controller-manager-846d6678c7-xj4js" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.533827 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6d71bcc8-af0f-4769-927b-915ca3eb7692-client-ca\") pod \"controller-manager-846d6678c7-xj4js\" (UID: \"6d71bcc8-af0f-4769-927b-915ca3eb7692\") " pod="openshift-controller-manager/controller-manager-846d6678c7-xj4js" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.533878 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d71bcc8-af0f-4769-927b-915ca3eb7692-proxy-ca-bundles\") pod \"controller-manager-846d6678c7-xj4js\" (UID: \"6d71bcc8-af0f-4769-927b-915ca3eb7692\") " pod="openshift-controller-manager/controller-manager-846d6678c7-xj4js" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.534028 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d71bcc8-af0f-4769-927b-915ca3eb7692-config\") pod \"controller-manager-846d6678c7-xj4js\" (UID: \"6d71bcc8-af0f-4769-927b-915ca3eb7692\") " pod="openshift-controller-manager/controller-manager-846d6678c7-xj4js" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.534085 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6d71bcc8-af0f-4769-927b-915ca3eb7692-serving-cert\") pod \"controller-manager-846d6678c7-xj4js\" (UID: \"6d71bcc8-af0f-4769-927b-915ca3eb7692\") " pod="openshift-controller-manager/controller-manager-846d6678c7-xj4js" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.636082 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glhtr\" (UniqueName: \"kubernetes.io/projected/6d71bcc8-af0f-4769-927b-915ca3eb7692-kube-api-access-glhtr\") pod \"controller-manager-846d6678c7-xj4js\" (UID: \"6d71bcc8-af0f-4769-927b-915ca3eb7692\") " pod="openshift-controller-manager/controller-manager-846d6678c7-xj4js" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.636146 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6d71bcc8-af0f-4769-927b-915ca3eb7692-client-ca\") pod \"controller-manager-846d6678c7-xj4js\" (UID: \"6d71bcc8-af0f-4769-927b-915ca3eb7692\") " pod="openshift-controller-manager/controller-manager-846d6678c7-xj4js" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.636200 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d71bcc8-af0f-4769-927b-915ca3eb7692-proxy-ca-bundles\") pod \"controller-manager-846d6678c7-xj4js\" (UID: \"6d71bcc8-af0f-4769-927b-915ca3eb7692\") " pod="openshift-controller-manager/controller-manager-846d6678c7-xj4js" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.636242 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d71bcc8-af0f-4769-927b-915ca3eb7692-config\") pod \"controller-manager-846d6678c7-xj4js\" (UID: \"6d71bcc8-af0f-4769-927b-915ca3eb7692\") " pod="openshift-controller-manager/controller-manager-846d6678c7-xj4js" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.636279 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d71bcc8-af0f-4769-927b-915ca3eb7692-serving-cert\") pod \"controller-manager-846d6678c7-xj4js\" (UID: \"6d71bcc8-af0f-4769-927b-915ca3eb7692\") " pod="openshift-controller-manager/controller-manager-846d6678c7-xj4js" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.637676 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6d71bcc8-af0f-4769-927b-915ca3eb7692-proxy-ca-bundles\") pod \"controller-manager-846d6678c7-xj4js\" (UID: \"6d71bcc8-af0f-4769-927b-915ca3eb7692\") " pod="openshift-controller-manager/controller-manager-846d6678c7-xj4js" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.637715 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6d71bcc8-af0f-4769-927b-915ca3eb7692-client-ca\") pod \"controller-manager-846d6678c7-xj4js\" (UID: \"6d71bcc8-af0f-4769-927b-915ca3eb7692\") " pod="openshift-controller-manager/controller-manager-846d6678c7-xj4js" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.638210 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d71bcc8-af0f-4769-927b-915ca3eb7692-config\") pod \"controller-manager-846d6678c7-xj4js\" (UID: \"6d71bcc8-af0f-4769-927b-915ca3eb7692\") " 
pod="openshift-controller-manager/controller-manager-846d6678c7-xj4js" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.644407 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d71bcc8-af0f-4769-927b-915ca3eb7692-serving-cert\") pod \"controller-manager-846d6678c7-xj4js\" (UID: \"6d71bcc8-af0f-4769-927b-915ca3eb7692\") " pod="openshift-controller-manager/controller-manager-846d6678c7-xj4js" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.653881 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glhtr\" (UniqueName: \"kubernetes.io/projected/6d71bcc8-af0f-4769-927b-915ca3eb7692-kube-api-access-glhtr\") pod \"controller-manager-846d6678c7-xj4js\" (UID: \"6d71bcc8-af0f-4769-927b-915ca3eb7692\") " pod="openshift-controller-manager/controller-manager-846d6678c7-xj4js" Jan 29 15:16:38 crc kubenswrapper[4757]: I0129 15:16:38.764533 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-846d6678c7-xj4js" Jan 29 15:16:39 crc kubenswrapper[4757]: I0129 15:16:39.149118 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-846d6678c7-xj4js"] Jan 29 15:16:39 crc kubenswrapper[4757]: I0129 15:16:39.403940 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a7bc6af-ed42-49e9-809a-1716dc426216" path="/var/lib/kubelet/pods/6a7bc6af-ed42-49e9-809a-1716dc426216/volumes" Jan 29 15:16:40 crc kubenswrapper[4757]: I0129 15:16:40.013857 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-846d6678c7-xj4js" event={"ID":"6d71bcc8-af0f-4769-927b-915ca3eb7692","Type":"ContainerStarted","Data":"ec578525eb2732aa0dfbbe5987963644cfb77329a56cce40af31b14bd59965d2"} Jan 29 15:16:40 crc kubenswrapper[4757]: I0129 15:16:40.014883 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-846d6678c7-xj4js" event={"ID":"6d71bcc8-af0f-4769-927b-915ca3eb7692","Type":"ContainerStarted","Data":"115ebe52dff823698e3118a667adfbade9cba31182fcb8650b056bb6c3738360"} Jan 29 15:16:40 crc kubenswrapper[4757]: I0129 15:16:40.015026 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-846d6678c7-xj4js" Jan 29 15:16:40 crc kubenswrapper[4757]: I0129 15:16:40.018386 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-846d6678c7-xj4js" Jan 29 15:16:40 crc kubenswrapper[4757]: I0129 15:16:40.035883 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-846d6678c7-xj4js" podStartSLOduration=3.035865528 podStartE2EDuration="3.035865528s" podCreationTimestamp="2026-01-29 15:16:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:16:40.030833519 +0000 UTC m=+363.320083766" watchObservedRunningTime="2026-01-29 15:16:40.035865528 +0000 UTC m=+363.325115765" Jan 29 15:16:41 crc kubenswrapper[4757]: E0129 15:16:41.399787 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-jhlrf" podUID="92724a14-21db-441f-b509-142dc0a8dc15" Jan 29 15:16:41 crc kubenswrapper[4757]: E0129 15:16:41.400842 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2jc8z" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" Jan 29 15:16:44 crc kubenswrapper[4757]: E0129 15:16:44.397744 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c5pw7" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" Jan 29 15:16:44 crc kubenswrapper[4757]: E0129 15:16:44.398180 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-btp4k" podUID="f2342b27-9060-4697-a957-65d07f099e82" Jan 29 15:16:45 crc kubenswrapper[4757]: E0129 15:16:45.397682 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pxw6w" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" Jan 29 15:16:46 crc kubenswrapper[4757]: E0129 15:16:46.399625 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-v8v75" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" Jan 29 15:16:47 crc kubenswrapper[4757]: I0129 15:16:47.604662 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:16:47 crc kubenswrapper[4757]: I0129 15:16:47.604738 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:16:48 crc kubenswrapper[4757]: E0129 15:16:48.397294 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-99p4m" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" Jan 29 15:16:50 crc kubenswrapper[4757]: E0129 15:16:50.400999 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-57qth" 
podUID="d4596539-1be7-44ac-8e25-3fd37c823166" Jan 29 15:16:55 crc kubenswrapper[4757]: E0129 15:16:55.398060 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-btp4k" podUID="f2342b27-9060-4697-a957-65d07f099e82" Jan 29 15:16:57 crc kubenswrapper[4757]: E0129 15:16:57.730661 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2jc8z" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" Jan 29 15:16:57 crc kubenswrapper[4757]: E0129 15:16:57.734052 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jhlrf" podUID="92724a14-21db-441f-b509-142dc0a8dc15" Jan 29 15:16:57 crc kubenswrapper[4757]: E0129 15:16:57.737591 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c5pw7" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" Jan 29 15:16:57 crc kubenswrapper[4757]: E0129 15:16:57.742440 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pxw6w" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" Jan 29 15:16:59 crc kubenswrapper[4757]: E0129 15:16:59.401720 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-v8v75" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" Jan 29 15:17:01 crc kubenswrapper[4757]: E0129 15:17:01.517819 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:17:01 crc kubenswrapper[4757]: E0129 15:17:01.518345 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rr9fs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-99p4m_openshift-marketplace(6f40510d-f93a-4a84-ad4a-e503fa0bdf09): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:17:01 crc kubenswrapper[4757]: E0129 15:17:01.519488 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-99p4m" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" Jan 29 15:17:05 crc kubenswrapper[4757]: E0129 15:17:05.398657 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-57qth" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" Jan 29 15:17:09 crc kubenswrapper[4757]: E0129 15:17:09.398086 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-btp4k" podUID="f2342b27-9060-4697-a957-65d07f099e82" Jan 29 15:17:10 crc kubenswrapper[4757]: E0129 15:17:10.398059 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pxw6w" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" Jan 29 15:17:10 crc kubenswrapper[4757]: E0129 15:17:10.398058 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/certified-operators-2jc8z" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" Jan 29 15:17:10 crc kubenswrapper[4757]: E0129 15:17:10.512249 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:17:10 crc kubenswrapper[4757]: E0129 15:17:10.512408 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgwj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-jhlrf_openshift-marketplace(92724a14-21db-441f-b509-142dc0a8dc15): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:17:10 crc kubenswrapper[4757]: E0129 15:17:10.515421 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-jhlrf" podUID="92724a14-21db-441f-b509-142dc0a8dc15" Jan 29 15:17:11 crc kubenswrapper[4757]: E0129 15:17:11.397853 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c5pw7" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" Jan 29 15:17:11 crc kubenswrapper[4757]: E0129 15:17:11.516989 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: 
Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:17:11 crc kubenswrapper[4757]: E0129 15:17:11.517390 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-swndd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-v8v75_openshift-marketplace(bce413ab-1d96-4e66-b700-db27f6b52966): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:17:11 crc kubenswrapper[4757]: E0129 15:17:11.518582 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-v8v75" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" Jan 29 15:17:13 crc kubenswrapper[4757]: E0129 15:17:13.398232 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-99p4m" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" Jan 29 15:17:15 crc kubenswrapper[4757]: I0129 15:17:15.561996 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9fxzp"] Jan 29 15:17:15 crc kubenswrapper[4757]: I0129 15:17:15.562864 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" Jan 29 15:17:15 crc kubenswrapper[4757]: I0129 15:17:15.579825 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9fxzp"] Jan 29 15:17:15 crc kubenswrapper[4757]: I0129 15:17:15.618410 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9f17295f-96e3-4687-adea-934b8812c6e6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9fxzp\" (UID: \"9f17295f-96e3-4687-adea-934b8812c6e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" Jan 29 15:17:15 crc kubenswrapper[4757]: I0129 15:17:15.618757 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f17295f-96e3-4687-adea-934b8812c6e6-bound-sa-token\") pod \"image-registry-66df7c8f76-9fxzp\" (UID: \"9f17295f-96e3-4687-adea-934b8812c6e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" Jan 29 15:17:15 crc kubenswrapper[4757]: I0129 15:17:15.618906 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f17295f-96e3-4687-adea-934b8812c6e6-trusted-ca\") pod \"image-registry-66df7c8f76-9fxzp\" (UID: \"9f17295f-96e3-4687-adea-934b8812c6e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" Jan 29 15:17:15 crc kubenswrapper[4757]: I0129 15:17:15.619033 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9fxzp\" (UID: \"9f17295f-96e3-4687-adea-934b8812c6e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" Jan 29 15:17:15 crc kubenswrapper[4757]: I0129 15:17:15.619166 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9f17295f-96e3-4687-adea-934b8812c6e6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9fxzp\" (UID: \"9f17295f-96e3-4687-adea-934b8812c6e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" Jan 29 15:17:15 crc kubenswrapper[4757]: I0129 15:17:15.619341 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfl5l\" (UniqueName: \"kubernetes.io/projected/9f17295f-96e3-4687-adea-934b8812c6e6-kube-api-access-nfl5l\") pod \"image-registry-66df7c8f76-9fxzp\" (UID: \"9f17295f-96e3-4687-adea-934b8812c6e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" Jan 29 15:17:15 crc kubenswrapper[4757]: I0129 15:17:15.619471 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9f17295f-96e3-4687-adea-934b8812c6e6-registry-tls\") pod \"image-registry-66df7c8f76-9fxzp\" (UID: \"9f17295f-96e3-4687-adea-934b8812c6e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" Jan 29 15:17:15 crc kubenswrapper[4757]: I0129 15:17:15.619653 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/9f17295f-96e3-4687-adea-934b8812c6e6-registry-certificates\") pod \"image-registry-66df7c8f76-9fxzp\" (UID: \"9f17295f-96e3-4687-adea-934b8812c6e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" Jan 29 15:17:15 crc kubenswrapper[4757]: I0129 15:17:15.654977 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9fxzp\" (UID: \"9f17295f-96e3-4687-adea-934b8812c6e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" Jan 29 15:17:15 crc kubenswrapper[4757]: I0129 15:17:15.720424 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9f17295f-96e3-4687-adea-934b8812c6e6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9fxzp\" (UID: \"9f17295f-96e3-4687-adea-934b8812c6e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" Jan 29 15:17:15 crc kubenswrapper[4757]: I0129 15:17:15.720490 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f17295f-96e3-4687-adea-934b8812c6e6-bound-sa-token\") pod \"image-registry-66df7c8f76-9fxzp\" (UID: \"9f17295f-96e3-4687-adea-934b8812c6e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" Jan 29 15:17:15 crc kubenswrapper[4757]: I0129 15:17:15.720537 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f17295f-96e3-4687-adea-934b8812c6e6-trusted-ca\") pod \"image-registry-66df7c8f76-9fxzp\" (UID: \"9f17295f-96e3-4687-adea-934b8812c6e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" Jan 29 15:17:15 crc kubenswrapper[4757]: I0129 15:17:15.720571 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9f17295f-96e3-4687-adea-934b8812c6e6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9fxzp\" (UID: \"9f17295f-96e3-4687-adea-934b8812c6e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" Jan 29 15:17:15 crc kubenswrapper[4757]: I0129 15:17:15.720593 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfl5l\" (UniqueName: \"kubernetes.io/projected/9f17295f-96e3-4687-adea-934b8812c6e6-kube-api-access-nfl5l\") pod \"image-registry-66df7c8f76-9fxzp\" (UID: \"9f17295f-96e3-4687-adea-934b8812c6e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" Jan 29 15:17:15 crc kubenswrapper[4757]: I0129 15:17:15.720614 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9f17295f-96e3-4687-adea-934b8812c6e6-registry-tls\") pod \"image-registry-66df7c8f76-9fxzp\" (UID: \"9f17295f-96e3-4687-adea-934b8812c6e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" Jan 29 15:17:15 crc kubenswrapper[4757]: I0129 15:17:15.720666 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9f17295f-96e3-4687-adea-934b8812c6e6-registry-certificates\") pod \"image-registry-66df7c8f76-9fxzp\" (UID: \"9f17295f-96e3-4687-adea-934b8812c6e6\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" Jan 29 15:17:15 crc kubenswrapper[4757]: I0129 15:17:15.721637 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9f17295f-96e3-4687-adea-934b8812c6e6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9fxzp\" (UID: \"9f17295f-96e3-4687-adea-934b8812c6e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" Jan 29 15:17:15 crc kubenswrapper[4757]: I0129 15:17:15.721947 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9f17295f-96e3-4687-adea-934b8812c6e6-registry-certificates\") pod \"image-registry-66df7c8f76-9fxzp\" (UID: \"9f17295f-96e3-4687-adea-934b8812c6e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" Jan 29 15:17:15 crc kubenswrapper[4757]: I0129 15:17:15.723045 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f17295f-96e3-4687-adea-934b8812c6e6-trusted-ca\") pod \"image-registry-66df7c8f76-9fxzp\" (UID: \"9f17295f-96e3-4687-adea-934b8812c6e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" Jan 29 15:17:15 crc kubenswrapper[4757]: I0129 15:17:15.732034 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9f17295f-96e3-4687-adea-934b8812c6e6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9fxzp\" (UID: \"9f17295f-96e3-4687-adea-934b8812c6e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" Jan 29 15:17:15 crc kubenswrapper[4757]: I0129 15:17:15.732121 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9f17295f-96e3-4687-adea-934b8812c6e6-registry-tls\") pod \"image-registry-66df7c8f76-9fxzp\" (UID: \"9f17295f-96e3-4687-adea-934b8812c6e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" Jan 29 15:17:15 crc kubenswrapper[4757]: I0129 15:17:15.736988 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f17295f-96e3-4687-adea-934b8812c6e6-bound-sa-token\") pod \"image-registry-66df7c8f76-9fxzp\" (UID: \"9f17295f-96e3-4687-adea-934b8812c6e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" Jan 29 15:17:15 crc kubenswrapper[4757]: I0129 15:17:15.741408 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfl5l\" (UniqueName: \"kubernetes.io/projected/9f17295f-96e3-4687-adea-934b8812c6e6-kube-api-access-nfl5l\") pod \"image-registry-66df7c8f76-9fxzp\" (UID: \"9f17295f-96e3-4687-adea-934b8812c6e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" Jan 29 15:17:15 crc kubenswrapper[4757]: I0129 15:17:15.883712 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" Jan 29 15:17:16 crc kubenswrapper[4757]: I0129 15:17:16.276886 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9fxzp"] Jan 29 15:17:16 crc kubenswrapper[4757]: W0129 15:17:16.284446 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f17295f_96e3_4687_adea_934b8812c6e6.slice/crio-632ebf02ac7488d853cc898bb092c044232622b5a57db1c5d2b3d27acad6548b WatchSource:0}: Error finding container 632ebf02ac7488d853cc898bb092c044232622b5a57db1c5d2b3d27acad6548b: Status 404 returned error can't find the container with id 632ebf02ac7488d853cc898bb092c044232622b5a57db1c5d2b3d27acad6548b Jan 29 15:17:16 crc kubenswrapper[4757]: E0129 15:17:16.529177 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:17:16 crc kubenswrapper[4757]: E0129 15:17:16.529392 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bg5b9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-57qth_openshift-marketplace(d4596539-1be7-44ac-8e25-3fd37c823166): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:17:16 crc kubenswrapper[4757]: E0129 15:17:16.530824 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/certified-operators-57qth" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" Jan 29 15:17:17 crc kubenswrapper[4757]: I0129 15:17:17.160399 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9"] Jan 29 15:17:17 crc kubenswrapper[4757]: I0129 15:17:17.162125 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9" podUID="f4f9e864-01cf-4960-8faf-a06fb3934a5a" containerName="route-controller-manager" containerID="cri-o://ba9df93c5e1999326415e8478c1cee0f2c798888338db3e5b27c3d17e2ff11c7" gracePeriod=30 Jan 29 15:17:17 crc kubenswrapper[4757]: I0129 15:17:17.210711 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" event={"ID":"9f17295f-96e3-4687-adea-934b8812c6e6","Type":"ContainerStarted","Data":"6c8dba8c65452a536f0a4c95a6a6ea9d774434bbbc9e2b4ebbee20797682298a"} Jan 29 15:17:17 crc kubenswrapper[4757]: I0129 15:17:17.210957 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" event={"ID":"9f17295f-96e3-4687-adea-934b8812c6e6","Type":"ContainerStarted","Data":"632ebf02ac7488d853cc898bb092c044232622b5a57db1c5d2b3d27acad6548b"} Jan 29 15:17:17 crc kubenswrapper[4757]: I0129 15:17:17.211057 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" Jan 29 15:17:17 crc kubenswrapper[4757]: I0129 15:17:17.228985 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" podStartSLOduration=2.228961476 podStartE2EDuration="2.228961476s" podCreationTimestamp="2026-01-29 15:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:17:17.224468246 +0000 UTC m=+400.513718503" watchObservedRunningTime="2026-01-29 15:17:17.228961476 +0000 UTC m=+400.518211713" Jan 29 15:17:17 crc kubenswrapper[4757]: I0129 15:17:17.605169 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:17:17 crc kubenswrapper[4757]: I0129 15:17:17.605245 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:17:17 crc kubenswrapper[4757]: I0129 15:17:17.638160 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9" Jan 29 15:17:17 crc kubenswrapper[4757]: I0129 15:17:17.741980 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4f9e864-01cf-4960-8faf-a06fb3934a5a-client-ca\") pod \"f4f9e864-01cf-4960-8faf-a06fb3934a5a\" (UID: \"f4f9e864-01cf-4960-8faf-a06fb3934a5a\") " Jan 29 15:17:17 crc kubenswrapper[4757]: I0129 15:17:17.742024 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4f9e864-01cf-4960-8faf-a06fb3934a5a-config\") pod \"f4f9e864-01cf-4960-8faf-a06fb3934a5a\" (UID: \"f4f9e864-01cf-4960-8faf-a06fb3934a5a\") " Jan 29 15:17:17 crc kubenswrapper[4757]: I0129 15:17:17.742067 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4f9e864-01cf-4960-8faf-a06fb3934a5a-serving-cert\") pod \"f4f9e864-01cf-4960-8faf-a06fb3934a5a\" (UID: \"f4f9e864-01cf-4960-8faf-a06fb3934a5a\") " Jan 29 15:17:17 crc kubenswrapper[4757]: I0129 15:17:17.742121 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmvmj\" (UniqueName: \"kubernetes.io/projected/f4f9e864-01cf-4960-8faf-a06fb3934a5a-kube-api-access-nmvmj\") pod \"f4f9e864-01cf-4960-8faf-a06fb3934a5a\" (UID: \"f4f9e864-01cf-4960-8faf-a06fb3934a5a\") " Jan 29 15:17:17 crc kubenswrapper[4757]: I0129 15:17:17.743483 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4f9e864-01cf-4960-8faf-a06fb3934a5a-config" (OuterVolumeSpecName: "config") pod "f4f9e864-01cf-4960-8faf-a06fb3934a5a" (UID: "f4f9e864-01cf-4960-8faf-a06fb3934a5a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:17:17 crc kubenswrapper[4757]: I0129 15:17:17.743628 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4f9e864-01cf-4960-8faf-a06fb3934a5a-client-ca" (OuterVolumeSpecName: "client-ca") pod "f4f9e864-01cf-4960-8faf-a06fb3934a5a" (UID: "f4f9e864-01cf-4960-8faf-a06fb3934a5a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:17:17 crc kubenswrapper[4757]: I0129 15:17:17.758394 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4f9e864-01cf-4960-8faf-a06fb3934a5a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f4f9e864-01cf-4960-8faf-a06fb3934a5a" (UID: "f4f9e864-01cf-4960-8faf-a06fb3934a5a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:17:17 crc kubenswrapper[4757]: I0129 15:17:17.763750 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4f9e864-01cf-4960-8faf-a06fb3934a5a-kube-api-access-nmvmj" (OuterVolumeSpecName: "kube-api-access-nmvmj") pod "f4f9e864-01cf-4960-8faf-a06fb3934a5a" (UID: "f4f9e864-01cf-4960-8faf-a06fb3934a5a"). InnerVolumeSpecName "kube-api-access-nmvmj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:17:17 crc kubenswrapper[4757]: I0129 15:17:17.843232 4757 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4f9e864-01cf-4960-8faf-a06fb3934a5a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:17:17 crc kubenswrapper[4757]: I0129 15:17:17.843296 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmvmj\" (UniqueName: \"kubernetes.io/projected/f4f9e864-01cf-4960-8faf-a06fb3934a5a-kube-api-access-nmvmj\") on node \"crc\" DevicePath \"\"" Jan 29 15:17:17 crc kubenswrapper[4757]: I0129 15:17:17.843316 4757 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4f9e864-01cf-4960-8faf-a06fb3934a5a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:17:17 crc kubenswrapper[4757]: I0129 15:17:17.843327 4757 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4f9e864-01cf-4960-8faf-a06fb3934a5a-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:17:18 crc kubenswrapper[4757]: I0129 15:17:18.218529 4757 generic.go:334] "Generic (PLEG): container finished" podID="f4f9e864-01cf-4960-8faf-a06fb3934a5a" containerID="ba9df93c5e1999326415e8478c1cee0f2c798888338db3e5b27c3d17e2ff11c7" exitCode=0 Jan 29 15:17:18 crc kubenswrapper[4757]: I0129 15:17:18.218610 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9" event={"ID":"f4f9e864-01cf-4960-8faf-a06fb3934a5a","Type":"ContainerDied","Data":"ba9df93c5e1999326415e8478c1cee0f2c798888338db3e5b27c3d17e2ff11c7"} Jan 29 15:17:18 crc kubenswrapper[4757]: I0129 15:17:18.218689 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9" event={"ID":"f4f9e864-01cf-4960-8faf-a06fb3934a5a","Type":"ContainerDied","Data":"c7da0a3090a496ccb041377c1c6dbdddaded002b1edddd7029c539abe4b1c8ae"} Jan 29 15:17:18 crc kubenswrapper[4757]: I0129 15:17:18.218720 4757 scope.go:117] "RemoveContainer" containerID="ba9df93c5e1999326415e8478c1cee0f2c798888338db3e5b27c3d17e2ff11c7" Jan 29 15:17:18 crc kubenswrapper[4757]: I0129 15:17:18.221750 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9" Jan 29 15:17:18 crc kubenswrapper[4757]: I0129 15:17:18.237237 4757 scope.go:117] "RemoveContainer" containerID="ba9df93c5e1999326415e8478c1cee0f2c798888338db3e5b27c3d17e2ff11c7" Jan 29 15:17:18 crc kubenswrapper[4757]: E0129 15:17:18.238057 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba9df93c5e1999326415e8478c1cee0f2c798888338db3e5b27c3d17e2ff11c7\": container with ID starting with ba9df93c5e1999326415e8478c1cee0f2c798888338db3e5b27c3d17e2ff11c7 not found: ID does not exist" containerID="ba9df93c5e1999326415e8478c1cee0f2c798888338db3e5b27c3d17e2ff11c7" Jan 29 15:17:18 crc kubenswrapper[4757]: I0129 15:17:18.238104 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba9df93c5e1999326415e8478c1cee0f2c798888338db3e5b27c3d17e2ff11c7"} err="failed to get container status \"ba9df93c5e1999326415e8478c1cee0f2c798888338db3e5b27c3d17e2ff11c7\": rpc error: code = NotFound desc = could not find container \"ba9df93c5e1999326415e8478c1cee0f2c798888338db3e5b27c3d17e2ff11c7\": container with ID starting with ba9df93c5e1999326415e8478c1cee0f2c798888338db3e5b27c3d17e2ff11c7 not found: ID does not exist" Jan 29 15:17:18 crc kubenswrapper[4757]: I0129 15:17:18.256671 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9"] Jan 29 15:17:18 crc kubenswrapper[4757]: I0129 15:17:18.260077 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dcb9544cc-9n8d9"] Jan 29 15:17:18 crc kubenswrapper[4757]: I0129 15:17:18.719881 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd7995f46-8wxqn"] Jan 29 15:17:18 crc kubenswrapper[4757]: E0129 15:17:18.720416 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4f9e864-01cf-4960-8faf-a06fb3934a5a" containerName="route-controller-manager" Jan 29 15:17:18 crc kubenswrapper[4757]: I0129 15:17:18.720450 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4f9e864-01cf-4960-8faf-a06fb3934a5a" containerName="route-controller-manager" Jan 29 15:17:18 crc kubenswrapper[4757]: I0129 15:17:18.720561 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4f9e864-01cf-4960-8faf-a06fb3934a5a" containerName="route-controller-manager" Jan 29 15:17:18 crc kubenswrapper[4757]: I0129 15:17:18.720943 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-8wxqn" Jan 29 15:17:18 crc kubenswrapper[4757]: I0129 15:17:18.723970 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 15:17:18 crc kubenswrapper[4757]: I0129 15:17:18.724138 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 15:17:18 crc kubenswrapper[4757]: I0129 15:17:18.724489 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 15:17:18 crc kubenswrapper[4757]: I0129 15:17:18.724580 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 15:17:18 crc kubenswrapper[4757]: I0129 15:17:18.724755 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 15:17:18 crc kubenswrapper[4757]: I0129 15:17:18.724854 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 15:17:18 crc kubenswrapper[4757]: I0129 15:17:18.741375 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd7995f46-8wxqn"] Jan 29 15:17:18 crc kubenswrapper[4757]: I0129 15:17:18.909344 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da1968f2-a13b-4444-be66-5274eebf2a39-serving-cert\") pod \"route-controller-manager-6cd7995f46-8wxqn\" (UID: \"da1968f2-a13b-4444-be66-5274eebf2a39\") " pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-8wxqn" Jan 29 15:17:18 crc kubenswrapper[4757]: I0129 15:17:18.909608 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fshfk\" (UniqueName: \"kubernetes.io/projected/da1968f2-a13b-4444-be66-5274eebf2a39-kube-api-access-fshfk\") pod \"route-controller-manager-6cd7995f46-8wxqn\" (UID: \"da1968f2-a13b-4444-be66-5274eebf2a39\") " pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-8wxqn" Jan 29 15:17:18 crc kubenswrapper[4757]: I0129 15:17:18.909662 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da1968f2-a13b-4444-be66-5274eebf2a39-config\") pod \"route-controller-manager-6cd7995f46-8wxqn\" (UID: \"da1968f2-a13b-4444-be66-5274eebf2a39\") " pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-8wxqn" Jan 29 15:17:18 crc kubenswrapper[4757]: I0129 15:17:18.909736 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da1968f2-a13b-4444-be66-5274eebf2a39-client-ca\") pod \"route-controller-manager-6cd7995f46-8wxqn\" (UID: \"da1968f2-a13b-4444-be66-5274eebf2a39\") " pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-8wxqn" Jan 29 15:17:19 crc kubenswrapper[4757]: I0129 15:17:19.011304 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da1968f2-a13b-4444-be66-5274eebf2a39-config\") pod 
\"route-controller-manager-6cd7995f46-8wxqn\" (UID: \"da1968f2-a13b-4444-be66-5274eebf2a39\") " pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-8wxqn" Jan 29 15:17:19 crc kubenswrapper[4757]: I0129 15:17:19.011391 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da1968f2-a13b-4444-be66-5274eebf2a39-client-ca\") pod \"route-controller-manager-6cd7995f46-8wxqn\" (UID: \"da1968f2-a13b-4444-be66-5274eebf2a39\") " pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-8wxqn" Jan 29 15:17:19 crc kubenswrapper[4757]: I0129 15:17:19.011416 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da1968f2-a13b-4444-be66-5274eebf2a39-serving-cert\") pod \"route-controller-manager-6cd7995f46-8wxqn\" (UID: \"da1968f2-a13b-4444-be66-5274eebf2a39\") " pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-8wxqn" Jan 29 15:17:19 crc kubenswrapper[4757]: I0129 15:17:19.011473 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fshfk\" (UniqueName: \"kubernetes.io/projected/da1968f2-a13b-4444-be66-5274eebf2a39-kube-api-access-fshfk\") pod \"route-controller-manager-6cd7995f46-8wxqn\" (UID: \"da1968f2-a13b-4444-be66-5274eebf2a39\") " pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-8wxqn" Jan 29 15:17:19 crc kubenswrapper[4757]: I0129 15:17:19.012650 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da1968f2-a13b-4444-be66-5274eebf2a39-client-ca\") pod \"route-controller-manager-6cd7995f46-8wxqn\" (UID: \"da1968f2-a13b-4444-be66-5274eebf2a39\") " pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-8wxqn" Jan 29 15:17:19 crc kubenswrapper[4757]: I0129 15:17:19.012737 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da1968f2-a13b-4444-be66-5274eebf2a39-config\") pod \"route-controller-manager-6cd7995f46-8wxqn\" (UID: \"da1968f2-a13b-4444-be66-5274eebf2a39\") " pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-8wxqn" Jan 29 15:17:19 crc kubenswrapper[4757]: I0129 15:17:19.017619 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da1968f2-a13b-4444-be66-5274eebf2a39-serving-cert\") pod \"route-controller-manager-6cd7995f46-8wxqn\" (UID: \"da1968f2-a13b-4444-be66-5274eebf2a39\") " pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-8wxqn" Jan 29 15:17:19 crc kubenswrapper[4757]: I0129 15:17:19.028448 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fshfk\" (UniqueName: \"kubernetes.io/projected/da1968f2-a13b-4444-be66-5274eebf2a39-kube-api-access-fshfk\") pod \"route-controller-manager-6cd7995f46-8wxqn\" (UID: \"da1968f2-a13b-4444-be66-5274eebf2a39\") " pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-8wxqn" Jan 29 15:17:19 crc kubenswrapper[4757]: I0129 15:17:19.112474 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-8wxqn" Jan 29 15:17:19 crc kubenswrapper[4757]: I0129 15:17:19.409432 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4f9e864-01cf-4960-8faf-a06fb3934a5a" path="/var/lib/kubelet/pods/f4f9e864-01cf-4960-8faf-a06fb3934a5a/volumes" Jan 29 15:17:19 crc kubenswrapper[4757]: I0129 15:17:19.507595 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd7995f46-8wxqn"] Jan 29 15:17:20 crc kubenswrapper[4757]: I0129 15:17:20.234458 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-8wxqn" event={"ID":"da1968f2-a13b-4444-be66-5274eebf2a39","Type":"ContainerStarted","Data":"ef9305bfbae63a1ecc06144b37002146c7ccf5e895643ebcdb62834b798e3906"} Jan 29 15:17:20 crc kubenswrapper[4757]: I0129 15:17:20.235084 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-8wxqn" Jan 29 15:17:20 crc kubenswrapper[4757]: I0129 15:17:20.235104 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-8wxqn" event={"ID":"da1968f2-a13b-4444-be66-5274eebf2a39","Type":"ContainerStarted","Data":"5246d2af80f0abaee1209f3fe424d4eb1271562f8620619245d4e66cd84eddd6"} Jan 29 15:17:20 crc kubenswrapper[4757]: I0129 15:17:20.254906 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-8wxqn" podStartSLOduration=3.254885131 podStartE2EDuration="3.254885131s" podCreationTimestamp="2026-01-29 15:17:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:17:20.252182953 +0000 UTC m=+403.541433190" watchObservedRunningTime="2026-01-29 15:17:20.254885131 +0000 UTC m=+403.544135368" Jan 29 15:17:20 crc kubenswrapper[4757]: I0129 15:17:20.287988 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cd7995f46-8wxqn" Jan 29 15:17:21 crc kubenswrapper[4757]: E0129 15:17:21.518988 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:17:21 crc kubenswrapper[4757]: E0129 15:17:21.519435 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rm2n7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-2jc8z_openshift-marketplace(43de85f7-11df-4e6f-8d3f-b982b03ce802): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:17:21 crc kubenswrapper[4757]: E0129 15:17:21.520656 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-2jc8z" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" Jan 29 15:17:25 crc kubenswrapper[4757]: E0129 15:17:25.751023 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-v8v75" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" Jan 29 15:17:25 crc kubenswrapper[4757]: E0129 15:17:25.755680 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jhlrf" podUID="92724a14-21db-441f-b509-142dc0a8dc15" Jan 29 15:17:26 crc kubenswrapper[4757]: E0129 15:17:26.397862 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-99p4m" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" Jan 29 15:17:27 crc kubenswrapper[4757]: I0129 15:17:27.275427 4757 generic.go:334] "Generic (PLEG): container finished" podID="f2342b27-9060-4697-a957-65d07f099e82" containerID="5e558c3be8fc50586498e6ec6235e7eabc24b581c5ea34900c084161676b8434" exitCode=0 Jan 29 15:17:27 crc kubenswrapper[4757]: I0129 
15:17:27.275528 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btp4k" event={"ID":"f2342b27-9060-4697-a957-65d07f099e82","Type":"ContainerDied","Data":"5e558c3be8fc50586498e6ec6235e7eabc24b581c5ea34900c084161676b8434"} Jan 29 15:17:29 crc kubenswrapper[4757]: I0129 15:17:29.287008 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5pw7" event={"ID":"4e10b6b9-259a-417c-ba5d-311e75543637","Type":"ContainerStarted","Data":"b1368a16f948e4ee35b4e207ae257574618fe7728f19702d23cbd5f46eb18e2a"} Jan 29 15:17:29 crc kubenswrapper[4757]: I0129 15:17:29.289845 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxw6w" event={"ID":"fd7070d7-3870-49f1-8976-094ad97b6efc","Type":"ContainerStarted","Data":"ce9e92422cd8f6e38a33f4e859c2d536c33a4d4a6e19e8202ec753ada698e94d"} Jan 29 15:17:29 crc kubenswrapper[4757]: E0129 15:17:29.813461 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-57qth" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" Jan 29 15:17:30 crc kubenswrapper[4757]: I0129 15:17:30.297172 4757 generic.go:334] "Generic (PLEG): container finished" podID="4e10b6b9-259a-417c-ba5d-311e75543637" containerID="b1368a16f948e4ee35b4e207ae257574618fe7728f19702d23cbd5f46eb18e2a" exitCode=0 Jan 29 15:17:30 crc kubenswrapper[4757]: I0129 15:17:30.297247 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5pw7" event={"ID":"4e10b6b9-259a-417c-ba5d-311e75543637","Type":"ContainerDied","Data":"b1368a16f948e4ee35b4e207ae257574618fe7728f19702d23cbd5f46eb18e2a"} Jan 29 15:17:30 crc kubenswrapper[4757]: I0129 15:17:30.302473 4757 generic.go:334] "Generic (PLEG): container finished" podID="fd7070d7-3870-49f1-8976-094ad97b6efc" containerID="ce9e92422cd8f6e38a33f4e859c2d536c33a4d4a6e19e8202ec753ada698e94d" exitCode=0 Jan 29 15:17:30 crc kubenswrapper[4757]: I0129 15:17:30.302561 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxw6w" event={"ID":"fd7070d7-3870-49f1-8976-094ad97b6efc","Type":"ContainerDied","Data":"ce9e92422cd8f6e38a33f4e859c2d536c33a4d4a6e19e8202ec753ada698e94d"} Jan 29 15:17:30 crc kubenswrapper[4757]: I0129 15:17:30.305557 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btp4k" event={"ID":"f2342b27-9060-4697-a957-65d07f099e82","Type":"ContainerStarted","Data":"0b4f8fe30dd05de38558ad00dc10f8984bb925f4887091e66d4ea69bcfd34352"} Jan 29 15:17:31 crc kubenswrapper[4757]: I0129 15:17:31.329530 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-btp4k" podStartSLOduration=3.538103635 podStartE2EDuration="4m14.3295108s" podCreationTimestamp="2026-01-29 15:13:17 +0000 UTC" firstStartedPulling="2026-01-29 15:13:19.023371345 +0000 UTC m=+162.312621582" lastFinishedPulling="2026-01-29 15:17:29.8147785 +0000 UTC m=+413.104028747" observedRunningTime="2026-01-29 15:17:31.325581946 +0000 UTC m=+414.614832213" watchObservedRunningTime="2026-01-29 15:17:31.3295108 +0000 UTC m=+414.618761037" Jan 29 15:17:33 crc kubenswrapper[4757]: I0129 15:17:33.325362 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-c5pw7" event={"ID":"4e10b6b9-259a-417c-ba5d-311e75543637","Type":"ContainerStarted","Data":"363fee6a03459c099d21dfb97c176c688a0b6a84b8e42c7f1644358ae3a4710c"} Jan 29 15:17:35 crc kubenswrapper[4757]: I0129 15:17:35.889010 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-9fxzp" Jan 29 15:17:35 crc kubenswrapper[4757]: I0129 15:17:35.918412 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c5pw7" podStartSLOduration=6.128946287 podStartE2EDuration="4m19.91839093s" podCreationTimestamp="2026-01-29 15:13:16 +0000 UTC" firstStartedPulling="2026-01-29 15:13:19.06358293 +0000 UTC m=+162.352833167" lastFinishedPulling="2026-01-29 15:17:32.853027573 +0000 UTC m=+416.142277810" observedRunningTime="2026-01-29 15:17:34.354996183 +0000 UTC m=+417.644246430" watchObservedRunningTime="2026-01-29 15:17:35.91839093 +0000 UTC m=+419.207641167" Jan 29 15:17:35 crc kubenswrapper[4757]: I0129 15:17:35.943726 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kjgkg"] Jan 29 15:17:36 crc kubenswrapper[4757]: I0129 15:17:36.344548 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxw6w" event={"ID":"fd7070d7-3870-49f1-8976-094ad97b6efc","Type":"ContainerStarted","Data":"a28daf933c4de609d293716297600aedbfbb676349a7c4b6ab81ff36fbbedb02"} Jan 29 15:17:36 crc kubenswrapper[4757]: I0129 15:17:36.370846 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pxw6w" podStartSLOduration=6.232213696 podStartE2EDuration="4m21.370827784s" podCreationTimestamp="2026-01-29 15:13:15 +0000 UTC" firstStartedPulling="2026-01-29 15:13:19.089207094 +0000 UTC m=+162.378457331" lastFinishedPulling="2026-01-29 15:17:34.227821192 +0000 UTC m=+417.517071419" observedRunningTime="2026-01-29 15:17:36.366350685 +0000 UTC m=+419.655600932" watchObservedRunningTime="2026-01-29 15:17:36.370827784 +0000 UTC m=+419.660078021" Jan 29 15:17:36 crc kubenswrapper[4757]: E0129 15:17:36.397756 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2jc8z" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" Jan 29 15:17:36 crc kubenswrapper[4757]: I0129 15:17:36.686669 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c5pw7" Jan 29 15:17:36 crc kubenswrapper[4757]: I0129 15:17:36.686715 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c5pw7" Jan 29 15:17:37 crc kubenswrapper[4757]: I0129 15:17:37.247819 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c5pw7" Jan 29 15:17:37 crc kubenswrapper[4757]: E0129 15:17:37.406318 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jhlrf" podUID="92724a14-21db-441f-b509-142dc0a8dc15" Jan 29 15:17:38 crc 
kubenswrapper[4757]: I0129 15:17:38.262763 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-btp4k" Jan 29 15:17:38 crc kubenswrapper[4757]: I0129 15:17:38.263313 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-btp4k" Jan 29 15:17:38 crc kubenswrapper[4757]: I0129 15:17:38.309499 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-btp4k" Jan 29 15:17:38 crc kubenswrapper[4757]: I0129 15:17:38.393504 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c5pw7" Jan 29 15:17:38 crc kubenswrapper[4757]: I0129 15:17:38.398008 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-btp4k" Jan 29 15:17:39 crc kubenswrapper[4757]: I0129 15:17:39.809358 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c5pw7"] Jan 29 15:17:40 crc kubenswrapper[4757]: I0129 15:17:40.366581 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c5pw7" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" containerName="registry-server" containerID="cri-o://363fee6a03459c099d21dfb97c176c688a0b6a84b8e42c7f1644358ae3a4710c" gracePeriod=2 Jan 29 15:17:40 crc kubenswrapper[4757]: E0129 15:17:40.397842 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-v8v75" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" Jan 29 15:17:41 crc kubenswrapper[4757]: I0129 15:17:41.374627 4757 generic.go:334] "Generic (PLEG): container finished" podID="4e10b6b9-259a-417c-ba5d-311e75543637" containerID="363fee6a03459c099d21dfb97c176c688a0b6a84b8e42c7f1644358ae3a4710c" exitCode=0 Jan 29 15:17:41 crc kubenswrapper[4757]: I0129 15:17:41.374706 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5pw7" event={"ID":"4e10b6b9-259a-417c-ba5d-311e75543637","Type":"ContainerDied","Data":"363fee6a03459c099d21dfb97c176c688a0b6a84b8e42c7f1644358ae3a4710c"} Jan 29 15:17:41 crc kubenswrapper[4757]: E0129 15:17:41.400032 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-99p4m" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" Jan 29 15:17:42 crc kubenswrapper[4757]: E0129 15:17:42.399403 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-57qth" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" Jan 29 15:17:42 crc kubenswrapper[4757]: I0129 15:17:42.838669 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c5pw7" Jan 29 15:17:42 crc kubenswrapper[4757]: I0129 15:17:42.871020 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e10b6b9-259a-417c-ba5d-311e75543637-catalog-content\") pod \"4e10b6b9-259a-417c-ba5d-311e75543637\" (UID: \"4e10b6b9-259a-417c-ba5d-311e75543637\") " Jan 29 15:17:42 crc kubenswrapper[4757]: I0129 15:17:42.871106 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8p6x\" (UniqueName: \"kubernetes.io/projected/4e10b6b9-259a-417c-ba5d-311e75543637-kube-api-access-d8p6x\") pod \"4e10b6b9-259a-417c-ba5d-311e75543637\" (UID: \"4e10b6b9-259a-417c-ba5d-311e75543637\") " Jan 29 15:17:42 crc kubenswrapper[4757]: I0129 15:17:42.871165 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e10b6b9-259a-417c-ba5d-311e75543637-utilities\") pod \"4e10b6b9-259a-417c-ba5d-311e75543637\" (UID: \"4e10b6b9-259a-417c-ba5d-311e75543637\") " Jan 29 15:17:42 crc kubenswrapper[4757]: I0129 15:17:42.872164 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e10b6b9-259a-417c-ba5d-311e75543637-utilities" (OuterVolumeSpecName: "utilities") pod "4e10b6b9-259a-417c-ba5d-311e75543637" (UID: "4e10b6b9-259a-417c-ba5d-311e75543637"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:17:42 crc kubenswrapper[4757]: I0129 15:17:42.885862 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e10b6b9-259a-417c-ba5d-311e75543637-kube-api-access-d8p6x" (OuterVolumeSpecName: "kube-api-access-d8p6x") pod "4e10b6b9-259a-417c-ba5d-311e75543637" (UID: "4e10b6b9-259a-417c-ba5d-311e75543637"). InnerVolumeSpecName "kube-api-access-d8p6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:17:42 crc kubenswrapper[4757]: I0129 15:17:42.973096 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8p6x\" (UniqueName: \"kubernetes.io/projected/4e10b6b9-259a-417c-ba5d-311e75543637-kube-api-access-d8p6x\") on node \"crc\" DevicePath \"\"" Jan 29 15:17:42 crc kubenswrapper[4757]: I0129 15:17:42.973127 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e10b6b9-259a-417c-ba5d-311e75543637-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:17:43 crc kubenswrapper[4757]: I0129 15:17:43.385127 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5pw7" event={"ID":"4e10b6b9-259a-417c-ba5d-311e75543637","Type":"ContainerDied","Data":"6a6dbacad046d1e4bfe9f9815d1b409eeee9beb1230dbcbfea6a464f685c534f"} Jan 29 15:17:43 crc kubenswrapper[4757]: I0129 15:17:43.385184 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c5pw7" Jan 29 15:17:43 crc kubenswrapper[4757]: I0129 15:17:43.385211 4757 scope.go:117] "RemoveContainer" containerID="363fee6a03459c099d21dfb97c176c688a0b6a84b8e42c7f1644358ae3a4710c" Jan 29 15:17:43 crc kubenswrapper[4757]: I0129 15:17:43.402493 4757 scope.go:117] "RemoveContainer" containerID="b1368a16f948e4ee35b4e207ae257574618fe7728f19702d23cbd5f46eb18e2a" Jan 29 15:17:43 crc kubenswrapper[4757]: I0129 15:17:43.419324 4757 scope.go:117] "RemoveContainer" containerID="33eb487cf9d4a6747b5b8e508373ba7db0db7d9788634cd1c52c29cae619e103" Jan 29 15:17:46 crc kubenswrapper[4757]: I0129 15:17:46.018395 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e10b6b9-259a-417c-ba5d-311e75543637-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e10b6b9-259a-417c-ba5d-311e75543637" (UID: "4e10b6b9-259a-417c-ba5d-311e75543637"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:17:46 crc kubenswrapper[4757]: I0129 15:17:46.018779 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e10b6b9-259a-417c-ba5d-311e75543637-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:17:46 crc kubenswrapper[4757]: I0129 15:17:46.111951 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c5pw7"] Jan 29 15:17:46 crc kubenswrapper[4757]: I0129 15:17:46.117021 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c5pw7"] Jan 29 15:17:46 crc kubenswrapper[4757]: I0129 15:17:46.258780 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pxw6w" Jan 29 15:17:46 crc kubenswrapper[4757]: I0129 15:17:46.258919 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pxw6w" Jan 29 15:17:46 crc kubenswrapper[4757]: I0129 15:17:46.308243 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pxw6w" Jan 29 15:17:46 crc kubenswrapper[4757]: I0129 15:17:46.444891 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pxw6w" Jan 29 15:17:47 crc kubenswrapper[4757]: I0129 15:17:47.404012 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" path="/var/lib/kubelet/pods/4e10b6b9-259a-417c-ba5d-311e75543637/volumes" Jan 29 15:17:47 crc kubenswrapper[4757]: I0129 15:17:47.605096 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:17:47 crc kubenswrapper[4757]: I0129 15:17:47.605200 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:17:47 crc kubenswrapper[4757]: I0129 15:17:47.605330 4757 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" Jan 29 15:17:47 crc kubenswrapper[4757]: I0129 15:17:47.606906 4757 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"989f6c946474d5c13e79a0e6cd5a831a42488fc707f84bbd376773aebb6df314"} pod="openshift-machine-config-operator/machine-config-daemon-45q8t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:17:47 crc kubenswrapper[4757]: I0129 15:17:47.607011 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" containerID="cri-o://989f6c946474d5c13e79a0e6cd5a831a42488fc707f84bbd376773aebb6df314" gracePeriod=600 Jan 29 15:17:48 crc kubenswrapper[4757]: I0129 15:17:48.416993 4757 generic.go:334] "Generic (PLEG): container finished" podID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerID="989f6c946474d5c13e79a0e6cd5a831a42488fc707f84bbd376773aebb6df314" exitCode=0 Jan 29 15:17:48 crc kubenswrapper[4757]: I0129 15:17:48.417704 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerDied","Data":"989f6c946474d5c13e79a0e6cd5a831a42488fc707f84bbd376773aebb6df314"} Jan 29 15:17:48 crc kubenswrapper[4757]: I0129 15:17:48.417747 4757 scope.go:117] "RemoveContainer" containerID="4f2d9d1c4d36b89e8d99b8b8e26d9a1261e8ab0917b0ff5b3004508b1842cad0" Jan 29 15:17:49 crc kubenswrapper[4757]: E0129 15:17:49.397639 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2jc8z" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" Jan 29 15:17:49 crc kubenswrapper[4757]: I0129 15:17:49.423837 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerStarted","Data":"6098b7f5130ded36e34d2b58124793f458af5b996fce28a164fa5b8bbd1a2dbd"} Jan 29 15:17:51 crc kubenswrapper[4757]: E0129 15:17:51.398059 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-v8v75" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" Jan 29 15:17:51 crc kubenswrapper[4757]: E0129 15:17:51.398084 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jhlrf" podUID="92724a14-21db-441f-b509-142dc0a8dc15" Jan 29 15:17:55 crc kubenswrapper[4757]: E0129 15:17:55.398001 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-99p4m" 
podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" Jan 29 15:17:57 crc kubenswrapper[4757]: E0129 15:17:57.401696 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-57qth" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" Jan 29 15:18:00 crc kubenswrapper[4757]: I0129 15:18:00.986803 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" podUID="8d6d2b51-0a99-4a7b-b46c-90fbeca34e13" containerName="registry" containerID="cri-o://b4e30bf9b9a7942d165538d84a162105f466bd380f1360c88ecebc6cd5755dbb" gracePeriod=30 Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.354357 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.377447 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-registry-certificates\") pod \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.377485 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2bx5\" (UniqueName: \"kubernetes.io/projected/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-kube-api-access-k2bx5\") pod \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.377503 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-bound-sa-token\") pod \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.377529 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-registry-tls\") pod \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.377565 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-ca-trust-extracted\") pod \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.377785 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.377820 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-installation-pull-secrets\") pod \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\" (UID: 
\"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.377853 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-trusted-ca\") pod \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\" (UID: \"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13\") " Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.378865 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.379651 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.386568 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.387925 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.388129 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-kube-api-access-k2bx5" (OuterVolumeSpecName: "kube-api-access-k2bx5") pod "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13"). InnerVolumeSpecName "kube-api-access-k2bx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.388884 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.395517 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.398532 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13" (UID: "8d6d2b51-0a99-4a7b-b46c-90fbeca34e13"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.478498 4757 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.478537 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2bx5\" (UniqueName: \"kubernetes.io/projected/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-kube-api-access-k2bx5\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.478548 4757 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.478556 4757 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.478566 4757 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.478577 4757 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.478593 4757 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.484563 4757 generic.go:334] "Generic (PLEG): container finished" podID="8d6d2b51-0a99-4a7b-b46c-90fbeca34e13" containerID="b4e30bf9b9a7942d165538d84a162105f466bd380f1360c88ecebc6cd5755dbb" exitCode=0 Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.484604 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" event={"ID":"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13","Type":"ContainerDied","Data":"b4e30bf9b9a7942d165538d84a162105f466bd380f1360c88ecebc6cd5755dbb"} Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.484630 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" event={"ID":"8d6d2b51-0a99-4a7b-b46c-90fbeca34e13","Type":"ContainerDied","Data":"3c006a4d75ae5e8f508176455f4470ba56ab95c81d55cff3975ee1b51fc8cfe6"} Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.484648 4757 scope.go:117] "RemoveContainer" containerID="b4e30bf9b9a7942d165538d84a162105f466bd380f1360c88ecebc6cd5755dbb" Jan 29 15:18:01 crc 
kubenswrapper[4757]: I0129 15:18:01.484748 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kjgkg" Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.517014 4757 scope.go:117] "RemoveContainer" containerID="b4e30bf9b9a7942d165538d84a162105f466bd380f1360c88ecebc6cd5755dbb" Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.517130 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kjgkg"] Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.520819 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kjgkg"] Jan 29 15:18:01 crc kubenswrapper[4757]: E0129 15:18:01.525374 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4e30bf9b9a7942d165538d84a162105f466bd380f1360c88ecebc6cd5755dbb\": container with ID starting with b4e30bf9b9a7942d165538d84a162105f466bd380f1360c88ecebc6cd5755dbb not found: ID does not exist" containerID="b4e30bf9b9a7942d165538d84a162105f466bd380f1360c88ecebc6cd5755dbb" Jan 29 15:18:01 crc kubenswrapper[4757]: I0129 15:18:01.525424 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4e30bf9b9a7942d165538d84a162105f466bd380f1360c88ecebc6cd5755dbb"} err="failed to get container status \"b4e30bf9b9a7942d165538d84a162105f466bd380f1360c88ecebc6cd5755dbb\": rpc error: code = NotFound desc = could not find container \"b4e30bf9b9a7942d165538d84a162105f466bd380f1360c88ecebc6cd5755dbb\": container with ID starting with b4e30bf9b9a7942d165538d84a162105f466bd380f1360c88ecebc6cd5755dbb not found: ID does not exist" Jan 29 15:18:02 crc kubenswrapper[4757]: E0129 15:18:02.398330 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2jc8z" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" Jan 29 15:18:03 crc kubenswrapper[4757]: I0129 15:18:03.403592 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d6d2b51-0a99-4a7b-b46c-90fbeca34e13" path="/var/lib/kubelet/pods/8d6d2b51-0a99-4a7b-b46c-90fbeca34e13/volumes" Jan 29 15:18:04 crc kubenswrapper[4757]: E0129 15:18:04.398141 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jhlrf" podUID="92724a14-21db-441f-b509-142dc0a8dc15" Jan 29 15:18:05 crc kubenswrapper[4757]: E0129 15:18:05.398008 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-v8v75" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" Jan 29 15:18:09 crc kubenswrapper[4757]: E0129 15:18:09.398133 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/certified-operators-57qth" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" Jan 29 15:18:10 crc kubenswrapper[4757]: E0129 15:18:10.398770 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-99p4m" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" Jan 29 15:18:15 crc kubenswrapper[4757]: E0129 15:18:15.398096 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2jc8z" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" Jan 29 15:18:16 crc kubenswrapper[4757]: E0129 15:18:16.397309 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jhlrf" podUID="92724a14-21db-441f-b509-142dc0a8dc15" Jan 29 15:18:17 crc kubenswrapper[4757]: E0129 15:18:17.404397 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-v8v75" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" Jan 29 15:18:20 crc kubenswrapper[4757]: E0129 15:18:20.399059 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-57qth" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" Jan 29 15:18:22 crc kubenswrapper[4757]: E0129 15:18:22.398230 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-99p4m" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" Jan 29 15:18:24 crc kubenswrapper[4757]: I0129 15:18:24.703377 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2jc8z"] Jan 29 15:18:24 crc kubenswrapper[4757]: I0129 15:18:24.715885 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-57qth"] Jan 29 15:18:24 crc kubenswrapper[4757]: I0129 15:18:24.731476 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pxw6w"] Jan 29 15:18:24 crc kubenswrapper[4757]: I0129 15:18:24.731792 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pxw6w" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" containerName="registry-server" containerID="cri-o://a28daf933c4de609d293716297600aedbfbb676349a7c4b6ab81ff36fbbedb02" gracePeriod=30 Jan 29 15:18:24 crc kubenswrapper[4757]: I0129 15:18:24.745468 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-grbn4"] Jan 29 15:18:24 crc kubenswrapper[4757]: 
I0129 15:18:24.745763 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-grbn4" podUID="d68b032e-f86c-4928-a676-03c9e49c6722" containerName="marketplace-operator" containerID="cri-o://648b1da2c0ca3898bfaae4861790da0c26c99c96b6fe560352e8cbec0fed5ada" gracePeriod=30 Jan 29 15:18:24 crc kubenswrapper[4757]: I0129 15:18:24.754349 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-btp4k"] Jan 29 15:18:24 crc kubenswrapper[4757]: I0129 15:18:24.754600 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-btp4k" podUID="f2342b27-9060-4697-a957-65d07f099e82" containerName="registry-server" containerID="cri-o://0b4f8fe30dd05de38558ad00dc10f8984bb925f4887091e66d4ea69bcfd34352" gracePeriod=30 Jan 29 15:18:24 crc kubenswrapper[4757]: I0129 15:18:24.779490 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jhlrf"] Jan 29 15:18:24 crc kubenswrapper[4757]: I0129 15:18:24.804997 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4bj76"] Jan 29 15:18:24 crc kubenswrapper[4757]: E0129 15:18:24.805497 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" containerName="registry-server" Jan 29 15:18:24 crc kubenswrapper[4757]: I0129 15:18:24.805510 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" containerName="registry-server" Jan 29 15:18:24 crc kubenswrapper[4757]: E0129 15:18:24.805531 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" containerName="extract-utilities" Jan 29 15:18:24 crc kubenswrapper[4757]: I0129 15:18:24.805538 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" containerName="extract-utilities" Jan 29 15:18:24 crc kubenswrapper[4757]: E0129 15:18:24.805553 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" containerName="extract-content" Jan 29 15:18:24 crc kubenswrapper[4757]: I0129 15:18:24.805559 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" containerName="extract-content" Jan 29 15:18:24 crc kubenswrapper[4757]: E0129 15:18:24.805575 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d6d2b51-0a99-4a7b-b46c-90fbeca34e13" containerName="registry" Jan 29 15:18:24 crc kubenswrapper[4757]: I0129 15:18:24.805581 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d6d2b51-0a99-4a7b-b46c-90fbeca34e13" containerName="registry" Jan 29 15:18:24 crc kubenswrapper[4757]: I0129 15:18:24.805769 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d6d2b51-0a99-4a7b-b46c-90fbeca34e13" containerName="registry" Jan 29 15:18:24 crc kubenswrapper[4757]: I0129 15:18:24.805787 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e10b6b9-259a-417c-ba5d-311e75543637" containerName="registry-server" Jan 29 15:18:24 crc kubenswrapper[4757]: I0129 15:18:24.807618 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4bj76" Jan 29 15:18:24 crc kubenswrapper[4757]: I0129 15:18:24.814198 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-99p4m"] Jan 29 15:18:24 crc kubenswrapper[4757]: I0129 15:18:24.817308 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v8v75"] Jan 29 15:18:24 crc kubenswrapper[4757]: I0129 15:18:24.827461 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4bj76"] Jan 29 15:18:24 crc kubenswrapper[4757]: I0129 15:18:24.914961 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c3ae448c-6e33-42e9-bc9b-e909525820fb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4bj76\" (UID: \"c3ae448c-6e33-42e9-bc9b-e909525820fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-4bj76" Jan 29 15:18:24 crc kubenswrapper[4757]: I0129 15:18:24.915245 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p96l7\" (UniqueName: \"kubernetes.io/projected/c3ae448c-6e33-42e9-bc9b-e909525820fb-kube-api-access-p96l7\") pod \"marketplace-operator-79b997595-4bj76\" (UID: \"c3ae448c-6e33-42e9-bc9b-e909525820fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-4bj76" Jan 29 15:18:24 crc kubenswrapper[4757]: I0129 15:18:24.915300 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3ae448c-6e33-42e9-bc9b-e909525820fb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4bj76\" (UID: \"c3ae448c-6e33-42e9-bc9b-e909525820fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-4bj76" Jan 29 15:18:25 crc kubenswrapper[4757]: I0129 15:18:25.015885 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p96l7\" (UniqueName: \"kubernetes.io/projected/c3ae448c-6e33-42e9-bc9b-e909525820fb-kube-api-access-p96l7\") pod \"marketplace-operator-79b997595-4bj76\" (UID: \"c3ae448c-6e33-42e9-bc9b-e909525820fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-4bj76" Jan 29 15:18:25 crc kubenswrapper[4757]: I0129 15:18:25.015963 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3ae448c-6e33-42e9-bc9b-e909525820fb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4bj76\" (UID: \"c3ae448c-6e33-42e9-bc9b-e909525820fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-4bj76" Jan 29 15:18:25 crc kubenswrapper[4757]: I0129 15:18:25.021411 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3ae448c-6e33-42e9-bc9b-e909525820fb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4bj76\" (UID: \"c3ae448c-6e33-42e9-bc9b-e909525820fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-4bj76" Jan 29 15:18:25 crc kubenswrapper[4757]: I0129 15:18:25.022295 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c3ae448c-6e33-42e9-bc9b-e909525820fb-marketplace-operator-metrics\") pod 
\"marketplace-operator-79b997595-4bj76\" (UID: \"c3ae448c-6e33-42e9-bc9b-e909525820fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-4bj76" Jan 29 15:18:25 crc kubenswrapper[4757]: I0129 15:18:25.016005 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c3ae448c-6e33-42e9-bc9b-e909525820fb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4bj76\" (UID: \"c3ae448c-6e33-42e9-bc9b-e909525820fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-4bj76" Jan 29 15:18:25 crc kubenswrapper[4757]: I0129 15:18:25.031204 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p96l7\" (UniqueName: \"kubernetes.io/projected/c3ae448c-6e33-42e9-bc9b-e909525820fb-kube-api-access-p96l7\") pod \"marketplace-operator-79b997595-4bj76\" (UID: \"c3ae448c-6e33-42e9-bc9b-e909525820fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-4bj76" Jan 29 15:18:25 crc kubenswrapper[4757]: I0129 15:18:25.146556 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4bj76" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.219392 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-57qth" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.225900 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jhlrf" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.233108 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2jc8z" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.323592 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-99p4m" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.327592 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43de85f7-11df-4e6f-8d3f-b982b03ce802-catalog-content\") pod \"43de85f7-11df-4e6f-8d3f-b982b03ce802\" (UID: \"43de85f7-11df-4e6f-8d3f-b982b03ce802\") " Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.327633 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg5b9\" (UniqueName: \"kubernetes.io/projected/d4596539-1be7-44ac-8e25-3fd37c823166-kube-api-access-bg5b9\") pod \"d4596539-1be7-44ac-8e25-3fd37c823166\" (UID: \"d4596539-1be7-44ac-8e25-3fd37c823166\") " Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.327661 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgwj9\" (UniqueName: \"kubernetes.io/projected/92724a14-21db-441f-b509-142dc0a8dc15-kube-api-access-xgwj9\") pod \"92724a14-21db-441f-b509-142dc0a8dc15\" (UID: \"92724a14-21db-441f-b509-142dc0a8dc15\") " Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.327681 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm2n7\" (UniqueName: \"kubernetes.io/projected/43de85f7-11df-4e6f-8d3f-b982b03ce802-kube-api-access-rm2n7\") pod \"43de85f7-11df-4e6f-8d3f-b982b03ce802\" (UID: \"43de85f7-11df-4e6f-8d3f-b982b03ce802\") " Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.327710 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92724a14-21db-441f-b509-142dc0a8dc15-utilities\") pod \"92724a14-21db-441f-b509-142dc0a8dc15\" (UID: \"92724a14-21db-441f-b509-142dc0a8dc15\") " Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.327777 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92724a14-21db-441f-b509-142dc0a8dc15-catalog-content\") pod \"92724a14-21db-441f-b509-142dc0a8dc15\" (UID: \"92724a14-21db-441f-b509-142dc0a8dc15\") " Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.327799 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4596539-1be7-44ac-8e25-3fd37c823166-utilities\") pod \"d4596539-1be7-44ac-8e25-3fd37c823166\" (UID: \"d4596539-1be7-44ac-8e25-3fd37c823166\") " Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.327813 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4596539-1be7-44ac-8e25-3fd37c823166-catalog-content\") pod \"d4596539-1be7-44ac-8e25-3fd37c823166\" (UID: \"d4596539-1be7-44ac-8e25-3fd37c823166\") " Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.327826 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43de85f7-11df-4e6f-8d3f-b982b03ce802-utilities\") pod \"43de85f7-11df-4e6f-8d3f-b982b03ce802\" (UID: \"43de85f7-11df-4e6f-8d3f-b982b03ce802\") " Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.329826 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92724a14-21db-441f-b509-142dc0a8dc15-utilities" (OuterVolumeSpecName: "utilities") pod 
"92724a14-21db-441f-b509-142dc0a8dc15" (UID: "92724a14-21db-441f-b509-142dc0a8dc15"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.330025 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43de85f7-11df-4e6f-8d3f-b982b03ce802-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43de85f7-11df-4e6f-8d3f-b982b03ce802" (UID: "43de85f7-11df-4e6f-8d3f-b982b03ce802"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.330123 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4596539-1be7-44ac-8e25-3fd37c823166-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4596539-1be7-44ac-8e25-3fd37c823166" (UID: "d4596539-1be7-44ac-8e25-3fd37c823166"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.330200 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43de85f7-11df-4e6f-8d3f-b982b03ce802-utilities" (OuterVolumeSpecName: "utilities") pod "43de85f7-11df-4e6f-8d3f-b982b03ce802" (UID: "43de85f7-11df-4e6f-8d3f-b982b03ce802"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.330348 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4596539-1be7-44ac-8e25-3fd37c823166-utilities" (OuterVolumeSpecName: "utilities") pod "d4596539-1be7-44ac-8e25-3fd37c823166" (UID: "d4596539-1be7-44ac-8e25-3fd37c823166"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.332619 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92724a14-21db-441f-b509-142dc0a8dc15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92724a14-21db-441f-b509-142dc0a8dc15" (UID: "92724a14-21db-441f-b509-142dc0a8dc15"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.333295 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43de85f7-11df-4e6f-8d3f-b982b03ce802-kube-api-access-rm2n7" (OuterVolumeSpecName: "kube-api-access-rm2n7") pod "43de85f7-11df-4e6f-8d3f-b982b03ce802" (UID: "43de85f7-11df-4e6f-8d3f-b982b03ce802"). InnerVolumeSpecName "kube-api-access-rm2n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.333356 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92724a14-21db-441f-b509-142dc0a8dc15-kube-api-access-xgwj9" (OuterVolumeSpecName: "kube-api-access-xgwj9") pod "92724a14-21db-441f-b509-142dc0a8dc15" (UID: "92724a14-21db-441f-b509-142dc0a8dc15"). InnerVolumeSpecName "kube-api-access-xgwj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.333653 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4596539-1be7-44ac-8e25-3fd37c823166-kube-api-access-bg5b9" (OuterVolumeSpecName: "kube-api-access-bg5b9") pod "d4596539-1be7-44ac-8e25-3fd37c823166" (UID: "d4596539-1be7-44ac-8e25-3fd37c823166"). InnerVolumeSpecName "kube-api-access-bg5b9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.337871 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v8v75" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.429225 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swndd\" (UniqueName: \"kubernetes.io/projected/bce413ab-1d96-4e66-b700-db27f6b52966-kube-api-access-swndd\") pod \"bce413ab-1d96-4e66-b700-db27f6b52966\" (UID: \"bce413ab-1d96-4e66-b700-db27f6b52966\") " Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.429293 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr9fs\" (UniqueName: \"kubernetes.io/projected/6f40510d-f93a-4a84-ad4a-e503fa0bdf09-kube-api-access-rr9fs\") pod \"6f40510d-f93a-4a84-ad4a-e503fa0bdf09\" (UID: \"6f40510d-f93a-4a84-ad4a-e503fa0bdf09\") " Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.429343 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bce413ab-1d96-4e66-b700-db27f6b52966-catalog-content\") pod \"bce413ab-1d96-4e66-b700-db27f6b52966\" (UID: \"bce413ab-1d96-4e66-b700-db27f6b52966\") " Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.429376 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bce413ab-1d96-4e66-b700-db27f6b52966-utilities\") pod \"bce413ab-1d96-4e66-b700-db27f6b52966\" (UID: \"bce413ab-1d96-4e66-b700-db27f6b52966\") " Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.429414 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f40510d-f93a-4a84-ad4a-e503fa0bdf09-utilities\") pod \"6f40510d-f93a-4a84-ad4a-e503fa0bdf09\" (UID: \"6f40510d-f93a-4a84-ad4a-e503fa0bdf09\") " Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.429472 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f40510d-f93a-4a84-ad4a-e503fa0bdf09-catalog-content\") pod \"6f40510d-f93a-4a84-ad4a-e503fa0bdf09\" (UID: \"6f40510d-f93a-4a84-ad4a-e503fa0bdf09\") " Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.429697 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92724a14-21db-441f-b509-142dc0a8dc15-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.429713 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4596539-1be7-44ac-8e25-3fd37c823166-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.430049 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/6f40510d-f93a-4a84-ad4a-e503fa0bdf09-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f40510d-f93a-4a84-ad4a-e503fa0bdf09" (UID: "6f40510d-f93a-4a84-ad4a-e503fa0bdf09"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.430082 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bce413ab-1d96-4e66-b700-db27f6b52966-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bce413ab-1d96-4e66-b700-db27f6b52966" (UID: "bce413ab-1d96-4e66-b700-db27f6b52966"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.430112 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4596539-1be7-44ac-8e25-3fd37c823166-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.430130 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43de85f7-11df-4e6f-8d3f-b982b03ce802-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.430152 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43de85f7-11df-4e6f-8d3f-b982b03ce802-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.430164 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg5b9\" (UniqueName: \"kubernetes.io/projected/d4596539-1be7-44ac-8e25-3fd37c823166-kube-api-access-bg5b9\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.430177 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgwj9\" (UniqueName: \"kubernetes.io/projected/92724a14-21db-441f-b509-142dc0a8dc15-kube-api-access-xgwj9\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.430189 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm2n7\" (UniqueName: \"kubernetes.io/projected/43de85f7-11df-4e6f-8d3f-b982b03ce802-kube-api-access-rm2n7\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.430218 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92724a14-21db-441f-b509-142dc0a8dc15-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.431475 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f40510d-f93a-4a84-ad4a-e503fa0bdf09-utilities" (OuterVolumeSpecName: "utilities") pod "6f40510d-f93a-4a84-ad4a-e503fa0bdf09" (UID: "6f40510d-f93a-4a84-ad4a-e503fa0bdf09"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.431938 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bce413ab-1d96-4e66-b700-db27f6b52966-utilities" (OuterVolumeSpecName: "utilities") pod "bce413ab-1d96-4e66-b700-db27f6b52966" (UID: "bce413ab-1d96-4e66-b700-db27f6b52966"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.433697 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f40510d-f93a-4a84-ad4a-e503fa0bdf09-kube-api-access-rr9fs" (OuterVolumeSpecName: "kube-api-access-rr9fs") pod "6f40510d-f93a-4a84-ad4a-e503fa0bdf09" (UID: "6f40510d-f93a-4a84-ad4a-e503fa0bdf09"). InnerVolumeSpecName "kube-api-access-rr9fs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.433924 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bce413ab-1d96-4e66-b700-db27f6b52966-kube-api-access-swndd" (OuterVolumeSpecName: "kube-api-access-swndd") pod "bce413ab-1d96-4e66-b700-db27f6b52966" (UID: "bce413ab-1d96-4e66-b700-db27f6b52966"). InnerVolumeSpecName "kube-api-access-swndd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.531073 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bce413ab-1d96-4e66-b700-db27f6b52966-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.531109 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bce413ab-1d96-4e66-b700-db27f6b52966-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.531123 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f40510d-f93a-4a84-ad4a-e503fa0bdf09-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.531137 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f40510d-f93a-4a84-ad4a-e503fa0bdf09-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.531152 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swndd\" (UniqueName: \"kubernetes.io/projected/bce413ab-1d96-4e66-b700-db27f6b52966-kube-api-access-swndd\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.531164 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rr9fs\" (UniqueName: \"kubernetes.io/projected/6f40510d-f93a-4a84-ad4a-e503fa0bdf09-kube-api-access-rr9fs\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.616438 4757 generic.go:334] "Generic (PLEG): container finished" podID="fd7070d7-3870-49f1-8976-094ad97b6efc" containerID="a28daf933c4de609d293716297600aedbfbb676349a7c4b6ab81ff36fbbedb02" exitCode=0 Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.616499 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxw6w" event={"ID":"fd7070d7-3870-49f1-8976-094ad97b6efc","Type":"ContainerDied","Data":"a28daf933c4de609d293716297600aedbfbb676349a7c4b6ab81ff36fbbedb02"} Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.618690 4757 generic.go:334] "Generic (PLEG): container finished" podID="f2342b27-9060-4697-a957-65d07f099e82" containerID="0b4f8fe30dd05de38558ad00dc10f8984bb925f4887091e66d4ea69bcfd34352" exitCode=0 Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.618760 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-btp4k" event={"ID":"f2342b27-9060-4697-a957-65d07f099e82","Type":"ContainerDied","Data":"0b4f8fe30dd05de38558ad00dc10f8984bb925f4887091e66d4ea69bcfd34352"} Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.620659 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57qth" event={"ID":"d4596539-1be7-44ac-8e25-3fd37c823166","Type":"ContainerDied","Data":"545b652b71fadc301ba075abd413458ac6e02b209f5c18d95f991e4f37186346"} Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.620692 4757 scope.go:117] "RemoveContainer" containerID="3e805b09d3de9c949b272e067a10b865f0b9768207ea43831a603c192f2abb2f" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.620728 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-57qth" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.622722 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-99p4m" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.622719 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99p4m" event={"ID":"6f40510d-f93a-4a84-ad4a-e503fa0bdf09","Type":"ContainerDied","Data":"48a18fbeb46be4236a22b36be0a73430c2b22ad985a28bbb6052b517677c98eb"} Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.632909 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8v75" event={"ID":"bce413ab-1d96-4e66-b700-db27f6b52966","Type":"ContainerDied","Data":"5f86ecd2623087577a1b8efa95f81ee47eab70a0be84f35e4665c3221ee72f28"} Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.633471 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v8v75" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.643947 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2jc8z" event={"ID":"43de85f7-11df-4e6f-8d3f-b982b03ce802","Type":"ContainerDied","Data":"97f906717ccc198996afc42c311011162821b18e40086373a8ba66c14501406f"} Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.644003 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2jc8z" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.647703 4757 scope.go:117] "RemoveContainer" containerID="fd2fc4641f6c3054a6b7505ab31e538096b06ac9dc4fb098aac3b7db7eb3a088" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.658398 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jhlrf" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.658394 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jhlrf" event={"ID":"92724a14-21db-441f-b509-142dc0a8dc15","Type":"ContainerDied","Data":"3e26fb4ca8785c9e78aec3ebfa31ae396e19431c9b52a2d84822aac52d255153"} Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.667714 4757 generic.go:334] "Generic (PLEG): container finished" podID="d68b032e-f86c-4928-a676-03c9e49c6722" containerID="648b1da2c0ca3898bfaae4861790da0c26c99c96b6fe560352e8cbec0fed5ada" exitCode=0 Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.667754 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-grbn4" event={"ID":"d68b032e-f86c-4928-a676-03c9e49c6722","Type":"ContainerDied","Data":"648b1da2c0ca3898bfaae4861790da0c26c99c96b6fe560352e8cbec0fed5ada"} Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.677399 4757 scope.go:117] "RemoveContainer" containerID="c0b90e7ea5d158a9744e68e1cf966de7415e79fa91d0f42bc8fbb5161e0bf23f" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.701484 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-57qth"] Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.706241 4757 scope.go:117] "RemoveContainer" containerID="8d6494d78f9cab25462f6121d0f17feaa1af864e8d14a5012e34844ec8237c36" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.722592 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-57qth"] Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.731168 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-99p4m"] Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.734832 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-99p4m"] Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.735072 4757 scope.go:117] "RemoveContainer" containerID="6c94b05528981f308a94e9f9d0cabd7e7d973e273b04ee0a2602a75af66511da" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.747078 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2jc8z"] Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.751768 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2jc8z"] Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.766596 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jhlrf"] Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.772511 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jhlrf"] Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.792459 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v8v75"] Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:25.795035 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v8v75"] Jan 29 15:18:26 crc kubenswrapper[4757]: E0129 15:18:26.259164 4757 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a28daf933c4de609d293716297600aedbfbb676349a7c4b6ab81ff36fbbedb02 is running failed: container 
process not found" containerID="a28daf933c4de609d293716297600aedbfbb676349a7c4b6ab81ff36fbbedb02" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 15:18:26 crc kubenswrapper[4757]: E0129 15:18:26.259501 4757 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a28daf933c4de609d293716297600aedbfbb676349a7c4b6ab81ff36fbbedb02 is running failed: container process not found" containerID="a28daf933c4de609d293716297600aedbfbb676349a7c4b6ab81ff36fbbedb02" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 15:18:26 crc kubenswrapper[4757]: E0129 15:18:26.259871 4757 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a28daf933c4de609d293716297600aedbfbb676349a7c4b6ab81ff36fbbedb02 is running failed: container process not found" containerID="a28daf933c4de609d293716297600aedbfbb676349a7c4b6ab81ff36fbbedb02" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 15:18:26 crc kubenswrapper[4757]: E0129 15:18:26.259936 4757 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a28daf933c4de609d293716297600aedbfbb676349a7c4b6ab81ff36fbbedb02 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-pxw6w" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" containerName="registry-server" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.483652 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btp4k" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.488497 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-grbn4" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.496761 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pxw6w" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.518570 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2cq2s"] Jan 29 15:18:26 crc kubenswrapper[4757]: E0129 15:18:26.518836 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" containerName="extract-utilities" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.518849 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" containerName="extract-utilities" Jan 29 15:18:26 crc kubenswrapper[4757]: E0129 15:18:26.519112 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2342b27-9060-4697-a957-65d07f099e82" containerName="extract-content" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.519120 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2342b27-9060-4697-a957-65d07f099e82" containerName="extract-content" Jan 29 15:18:26 crc kubenswrapper[4757]: E0129 15:18:26.519129 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" containerName="extract-content" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.519136 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" containerName="extract-content" Jan 29 15:18:26 crc kubenswrapper[4757]: E0129 15:18:26.519145 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" containerName="registry-server" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.519151 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" containerName="registry-server" Jan 29 15:18:26 crc kubenswrapper[4757]: E0129 15:18:26.519159 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92724a14-21db-441f-b509-142dc0a8dc15" containerName="extract-utilities" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.519165 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="92724a14-21db-441f-b509-142dc0a8dc15" containerName="extract-utilities" Jan 29 15:18:26 crc kubenswrapper[4757]: E0129 15:18:26.519174 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" containerName="extract-utilities" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.519179 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" containerName="extract-utilities" Jan 29 15:18:26 crc kubenswrapper[4757]: E0129 15:18:26.519192 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" containerName="extract-utilities" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.519198 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" containerName="extract-utilities" Jan 29 15:18:26 crc kubenswrapper[4757]: E0129 15:18:26.519209 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d68b032e-f86c-4928-a676-03c9e49c6722" containerName="marketplace-operator" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.519217 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="d68b032e-f86c-4928-a676-03c9e49c6722" containerName="marketplace-operator" Jan 29 15:18:26 crc kubenswrapper[4757]: E0129 15:18:26.519226 4757 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f2342b27-9060-4697-a957-65d07f099e82" containerName="extract-utilities" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.519231 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2342b27-9060-4697-a957-65d07f099e82" containerName="extract-utilities" Jan 29 15:18:26 crc kubenswrapper[4757]: E0129 15:18:26.519240 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" containerName="extract-utilities" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.519246 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" containerName="extract-utilities" Jan 29 15:18:26 crc kubenswrapper[4757]: E0129 15:18:26.519254 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2342b27-9060-4697-a957-65d07f099e82" containerName="registry-server" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.519260 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2342b27-9060-4697-a957-65d07f099e82" containerName="registry-server" Jan 29 15:18:26 crc kubenswrapper[4757]: E0129 15:18:26.519284 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" containerName="extract-utilities" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.519291 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" containerName="extract-utilities" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.519375 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" containerName="registry-server" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.519386 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="d68b032e-f86c-4928-a676-03c9e49c6722" containerName="marketplace-operator" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.519396 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" containerName="extract-utilities" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.519403 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="92724a14-21db-441f-b509-142dc0a8dc15" containerName="extract-utilities" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.519412 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" containerName="extract-utilities" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.519421 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" containerName="extract-utilities" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.519428 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2342b27-9060-4697-a957-65d07f099e82" containerName="registry-server" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.519435 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" containerName="extract-utilities" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.520133 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2cq2s" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.526251 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.552601 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2342b27-9060-4697-a957-65d07f099e82-catalog-content\") pod \"f2342b27-9060-4697-a957-65d07f099e82\" (UID: \"f2342b27-9060-4697-a957-65d07f099e82\") " Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.552684 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd7070d7-3870-49f1-8976-094ad97b6efc-utilities\") pod \"fd7070d7-3870-49f1-8976-094ad97b6efc\" (UID: \"fd7070d7-3870-49f1-8976-094ad97b6efc\") " Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.552707 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd7070d7-3870-49f1-8976-094ad97b6efc-catalog-content\") pod \"fd7070d7-3870-49f1-8976-094ad97b6efc\" (UID: \"fd7070d7-3870-49f1-8976-094ad97b6efc\") " Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.552731 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvvcl\" (UniqueName: \"kubernetes.io/projected/f2342b27-9060-4697-a957-65d07f099e82-kube-api-access-tvvcl\") pod \"f2342b27-9060-4697-a957-65d07f099e82\" (UID: \"f2342b27-9060-4697-a957-65d07f099e82\") " Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.552750 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlr7r\" (UniqueName: \"kubernetes.io/projected/d68b032e-f86c-4928-a676-03c9e49c6722-kube-api-access-nlr7r\") pod \"d68b032e-f86c-4928-a676-03c9e49c6722\" (UID: \"d68b032e-f86c-4928-a676-03c9e49c6722\") " Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.552770 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjf2f\" (UniqueName: \"kubernetes.io/projected/fd7070d7-3870-49f1-8976-094ad97b6efc-kube-api-access-bjf2f\") pod \"fd7070d7-3870-49f1-8976-094ad97b6efc\" (UID: \"fd7070d7-3870-49f1-8976-094ad97b6efc\") " Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.552810 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d68b032e-f86c-4928-a676-03c9e49c6722-marketplace-trusted-ca\") pod \"d68b032e-f86c-4928-a676-03c9e49c6722\" (UID: \"d68b032e-f86c-4928-a676-03c9e49c6722\") " Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.552843 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d68b032e-f86c-4928-a676-03c9e49c6722-marketplace-operator-metrics\") pod \"d68b032e-f86c-4928-a676-03c9e49c6722\" (UID: \"d68b032e-f86c-4928-a676-03c9e49c6722\") " Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.552859 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2342b27-9060-4697-a957-65d07f099e82-utilities\") pod \"f2342b27-9060-4697-a957-65d07f099e82\" (UID: \"f2342b27-9060-4697-a957-65d07f099e82\") " Jan 29 15:18:26 crc 
kubenswrapper[4757]: I0129 15:18:26.552984 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7hhw\" (UniqueName: \"kubernetes.io/projected/865d1515-2b66-4b6e-b670-d01e37c88cac-kube-api-access-w7hhw\") pod \"certified-operators-2cq2s\" (UID: \"865d1515-2b66-4b6e-b670-d01e37c88cac\") " pod="openshift-marketplace/certified-operators-2cq2s" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.553015 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/865d1515-2b66-4b6e-b670-d01e37c88cac-utilities\") pod \"certified-operators-2cq2s\" (UID: \"865d1515-2b66-4b6e-b670-d01e37c88cac\") " pod="openshift-marketplace/certified-operators-2cq2s" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.553051 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/865d1515-2b66-4b6e-b670-d01e37c88cac-catalog-content\") pod \"certified-operators-2cq2s\" (UID: \"865d1515-2b66-4b6e-b670-d01e37c88cac\") " pod="openshift-marketplace/certified-operators-2cq2s" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.553191 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2cq2s"] Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.554681 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d68b032e-f86c-4928-a676-03c9e49c6722-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "d68b032e-f86c-4928-a676-03c9e49c6722" (UID: "d68b032e-f86c-4928-a676-03c9e49c6722"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.556959 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2342b27-9060-4697-a957-65d07f099e82-utilities" (OuterVolumeSpecName: "utilities") pod "f2342b27-9060-4697-a957-65d07f099e82" (UID: "f2342b27-9060-4697-a957-65d07f099e82"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.558977 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d68b032e-f86c-4928-a676-03c9e49c6722-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "d68b032e-f86c-4928-a676-03c9e49c6722" (UID: "d68b032e-f86c-4928-a676-03c9e49c6722"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.559939 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd7070d7-3870-49f1-8976-094ad97b6efc-utilities" (OuterVolumeSpecName: "utilities") pod "fd7070d7-3870-49f1-8976-094ad97b6efc" (UID: "fd7070d7-3870-49f1-8976-094ad97b6efc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.572100 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2342b27-9060-4697-a957-65d07f099e82-kube-api-access-tvvcl" (OuterVolumeSpecName: "kube-api-access-tvvcl") pod "f2342b27-9060-4697-a957-65d07f099e82" (UID: "f2342b27-9060-4697-a957-65d07f099e82"). InnerVolumeSpecName "kube-api-access-tvvcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.575441 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d68b032e-f86c-4928-a676-03c9e49c6722-kube-api-access-nlr7r" (OuterVolumeSpecName: "kube-api-access-nlr7r") pod "d68b032e-f86c-4928-a676-03c9e49c6722" (UID: "d68b032e-f86c-4928-a676-03c9e49c6722"). InnerVolumeSpecName "kube-api-access-nlr7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.578446 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd7070d7-3870-49f1-8976-094ad97b6efc-kube-api-access-bjf2f" (OuterVolumeSpecName: "kube-api-access-bjf2f") pod "fd7070d7-3870-49f1-8976-094ad97b6efc" (UID: "fd7070d7-3870-49f1-8976-094ad97b6efc"). InnerVolumeSpecName "kube-api-access-bjf2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.601370 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4bj76"] Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.601621 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2342b27-9060-4697-a957-65d07f099e82-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2342b27-9060-4697-a957-65d07f099e82" (UID: "f2342b27-9060-4697-a957-65d07f099e82"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:18:26 crc kubenswrapper[4757]: W0129 15:18:26.616451 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3ae448c_6e33_42e9_bc9b_e909525820fb.slice/crio-92d9360f01d579a2427540d4a19d3da149d6a4a32667204e948ff0ca2cf932f8 WatchSource:0}: Error finding container 92d9360f01d579a2427540d4a19d3da149d6a4a32667204e948ff0ca2cf932f8: Status 404 returned error can't find the container with id 92d9360f01d579a2427540d4a19d3da149d6a4a32667204e948ff0ca2cf932f8 Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.642997 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd7070d7-3870-49f1-8976-094ad97b6efc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd7070d7-3870-49f1-8976-094ad97b6efc" (UID: "fd7070d7-3870-49f1-8976-094ad97b6efc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.654169 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/865d1515-2b66-4b6e-b670-d01e37c88cac-catalog-content\") pod \"certified-operators-2cq2s\" (UID: \"865d1515-2b66-4b6e-b670-d01e37c88cac\") " pod="openshift-marketplace/certified-operators-2cq2s" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.654446 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7hhw\" (UniqueName: \"kubernetes.io/projected/865d1515-2b66-4b6e-b670-d01e37c88cac-kube-api-access-w7hhw\") pod \"certified-operators-2cq2s\" (UID: \"865d1515-2b66-4b6e-b670-d01e37c88cac\") " pod="openshift-marketplace/certified-operators-2cq2s" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.654570 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/865d1515-2b66-4b6e-b670-d01e37c88cac-utilities\") pod \"certified-operators-2cq2s\" (UID: \"865d1515-2b66-4b6e-b670-d01e37c88cac\") " pod="openshift-marketplace/certified-operators-2cq2s" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.654691 4757 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d68b032e-f86c-4928-a676-03c9e49c6722-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.654770 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2342b27-9060-4697-a957-65d07f099e82-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.655370 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2342b27-9060-4697-a957-65d07f099e82-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.655392 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd7070d7-3870-49f1-8976-094ad97b6efc-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.655402 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd7070d7-3870-49f1-8976-094ad97b6efc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.655413 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvvcl\" (UniqueName: \"kubernetes.io/projected/f2342b27-9060-4697-a957-65d07f099e82-kube-api-access-tvvcl\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.655422 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlr7r\" (UniqueName: \"kubernetes.io/projected/d68b032e-f86c-4928-a676-03c9e49c6722-kube-api-access-nlr7r\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.655430 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjf2f\" (UniqueName: \"kubernetes.io/projected/fd7070d7-3870-49f1-8976-094ad97b6efc-kube-api-access-bjf2f\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.655439 4757 reconciler_common.go:293] "Volume detached 
for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d68b032e-f86c-4928-a676-03c9e49c6722-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.655034 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/865d1515-2b66-4b6e-b670-d01e37c88cac-utilities\") pod \"certified-operators-2cq2s\" (UID: \"865d1515-2b66-4b6e-b670-d01e37c88cac\") " pod="openshift-marketplace/certified-operators-2cq2s" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.654858 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/865d1515-2b66-4b6e-b670-d01e37c88cac-catalog-content\") pod \"certified-operators-2cq2s\" (UID: \"865d1515-2b66-4b6e-b670-d01e37c88cac\") " pod="openshift-marketplace/certified-operators-2cq2s" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.676728 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7hhw\" (UniqueName: \"kubernetes.io/projected/865d1515-2b66-4b6e-b670-d01e37c88cac-kube-api-access-w7hhw\") pod \"certified-operators-2cq2s\" (UID: \"865d1515-2b66-4b6e-b670-d01e37c88cac\") " pod="openshift-marketplace/certified-operators-2cq2s" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.679599 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4bj76" event={"ID":"c3ae448c-6e33-42e9-bc9b-e909525820fb","Type":"ContainerStarted","Data":"92d9360f01d579a2427540d4a19d3da149d6a4a32667204e948ff0ca2cf932f8"} Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.681743 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-btp4k" event={"ID":"f2342b27-9060-4697-a957-65d07f099e82","Type":"ContainerDied","Data":"6af8def668221c99e21640b19c2ee6a6757b80a824e97d432b5d0e68881578fe"} Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.681799 4757 scope.go:117] "RemoveContainer" containerID="0b4f8fe30dd05de38558ad00dc10f8984bb925f4887091e66d4ea69bcfd34352" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.681797 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-btp4k" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.689897 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-grbn4" event={"ID":"d68b032e-f86c-4928-a676-03c9e49c6722","Type":"ContainerDied","Data":"4f7ef3e6aea70420e15440acca913ff3a0e396beb5db2944024a0da5062546df"} Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.689931 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-grbn4" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.694037 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxw6w" event={"ID":"fd7070d7-3870-49f1-8976-094ad97b6efc","Type":"ContainerDied","Data":"6cb827fcca3a5433ae5e995c9d5cdd2fe816c9d942530449f1665b3241ccdd17"} Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.694070 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pxw6w" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.704214 4757 scope.go:117] "RemoveContainer" containerID="5e558c3be8fc50586498e6ec6235e7eabc24b581c5ea34900c084161676b8434" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.718150 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-btp4k"] Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.725971 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-btp4k"] Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.730573 4757 scope.go:117] "RemoveContainer" containerID="f45aef4d53e4d1f03def42bdc1c7a05993e482bf150c45599984f1f9238829bc" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.731532 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-grbn4"] Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.739640 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-grbn4"] Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.747672 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pxw6w"] Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.747721 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pxw6w"] Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.751309 4757 scope.go:117] "RemoveContainer" containerID="648b1da2c0ca3898bfaae4861790da0c26c99c96b6fe560352e8cbec0fed5ada" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.772649 4757 scope.go:117] "RemoveContainer" containerID="a28daf933c4de609d293716297600aedbfbb676349a7c4b6ab81ff36fbbedb02" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.785740 4757 scope.go:117] "RemoveContainer" containerID="ce9e92422cd8f6e38a33f4e859c2d536c33a4d4a6e19e8202ec753ada698e94d" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.800723 4757 scope.go:117] "RemoveContainer" containerID="3353db46eb4906dd27361821b7dced7ea3843529d6a0d93475705822a970588e" Jan 29 15:18:26 crc kubenswrapper[4757]: I0129 15:18:26.837713 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2cq2s" Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.226473 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2cq2s"] Jan 29 15:18:27 crc kubenswrapper[4757]: W0129 15:18:27.231352 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod865d1515_2b66_4b6e_b670_d01e37c88cac.slice/crio-385c215096dcbbf697aeb0e23d67099a946a09af1b78987e117a846b56559a39 WatchSource:0}: Error finding container 385c215096dcbbf697aeb0e23d67099a946a09af1b78987e117a846b56559a39: Status 404 returned error can't find the container with id 385c215096dcbbf697aeb0e23d67099a946a09af1b78987e117a846b56559a39 Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.402407 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43de85f7-11df-4e6f-8d3f-b982b03ce802" path="/var/lib/kubelet/pods/43de85f7-11df-4e6f-8d3f-b982b03ce802/volumes" Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.403056 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f40510d-f93a-4a84-ad4a-e503fa0bdf09" path="/var/lib/kubelet/pods/6f40510d-f93a-4a84-ad4a-e503fa0bdf09/volumes" Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.403504 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92724a14-21db-441f-b509-142dc0a8dc15" path="/var/lib/kubelet/pods/92724a14-21db-441f-b509-142dc0a8dc15/volumes" Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.403926 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bce413ab-1d96-4e66-b700-db27f6b52966" path="/var/lib/kubelet/pods/bce413ab-1d96-4e66-b700-db27f6b52966/volumes" Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.404931 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4596539-1be7-44ac-8e25-3fd37c823166" path="/var/lib/kubelet/pods/d4596539-1be7-44ac-8e25-3fd37c823166/volumes" Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.405423 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d68b032e-f86c-4928-a676-03c9e49c6722" path="/var/lib/kubelet/pods/d68b032e-f86c-4928-a676-03c9e49c6722/volumes" Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.405846 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2342b27-9060-4697-a957-65d07f099e82" path="/var/lib/kubelet/pods/f2342b27-9060-4697-a957-65d07f099e82/volumes" Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.406800 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd7070d7-3870-49f1-8976-094ad97b6efc" path="/var/lib/kubelet/pods/fd7070d7-3870-49f1-8976-094ad97b6efc/volumes" Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.522112 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7cmzn"] Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.524018 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7cmzn" Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.526570 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.534487 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7cmzn"] Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.566234 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2xrf\" (UniqueName: \"kubernetes.io/projected/12bc2599-1396-4296-b78a-d37850977495-kube-api-access-t2xrf\") pod \"redhat-marketplace-7cmzn\" (UID: \"12bc2599-1396-4296-b78a-d37850977495\") " pod="openshift-marketplace/redhat-marketplace-7cmzn" Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.566333 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12bc2599-1396-4296-b78a-d37850977495-catalog-content\") pod \"redhat-marketplace-7cmzn\" (UID: \"12bc2599-1396-4296-b78a-d37850977495\") " pod="openshift-marketplace/redhat-marketplace-7cmzn" Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.566351 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12bc2599-1396-4296-b78a-d37850977495-utilities\") pod \"redhat-marketplace-7cmzn\" (UID: \"12bc2599-1396-4296-b78a-d37850977495\") " pod="openshift-marketplace/redhat-marketplace-7cmzn" Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.666747 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12bc2599-1396-4296-b78a-d37850977495-catalog-content\") pod \"redhat-marketplace-7cmzn\" (UID: \"12bc2599-1396-4296-b78a-d37850977495\") " pod="openshift-marketplace/redhat-marketplace-7cmzn" Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.666787 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12bc2599-1396-4296-b78a-d37850977495-utilities\") pod \"redhat-marketplace-7cmzn\" (UID: \"12bc2599-1396-4296-b78a-d37850977495\") " pod="openshift-marketplace/redhat-marketplace-7cmzn" Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.666822 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2xrf\" (UniqueName: \"kubernetes.io/projected/12bc2599-1396-4296-b78a-d37850977495-kube-api-access-t2xrf\") pod \"redhat-marketplace-7cmzn\" (UID: \"12bc2599-1396-4296-b78a-d37850977495\") " pod="openshift-marketplace/redhat-marketplace-7cmzn" Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.667489 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12bc2599-1396-4296-b78a-d37850977495-catalog-content\") pod \"redhat-marketplace-7cmzn\" (UID: \"12bc2599-1396-4296-b78a-d37850977495\") " pod="openshift-marketplace/redhat-marketplace-7cmzn" Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.667693 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12bc2599-1396-4296-b78a-d37850977495-utilities\") pod \"redhat-marketplace-7cmzn\" (UID: 
\"12bc2599-1396-4296-b78a-d37850977495\") " pod="openshift-marketplace/redhat-marketplace-7cmzn" Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.688896 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2xrf\" (UniqueName: \"kubernetes.io/projected/12bc2599-1396-4296-b78a-d37850977495-kube-api-access-t2xrf\") pod \"redhat-marketplace-7cmzn\" (UID: \"12bc2599-1396-4296-b78a-d37850977495\") " pod="openshift-marketplace/redhat-marketplace-7cmzn" Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.700865 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4bj76" event={"ID":"c3ae448c-6e33-42e9-bc9b-e909525820fb","Type":"ContainerStarted","Data":"bc930da661314b499f585c2e5f847028fb86fafb6153a8bd777df57023e24beb"} Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.701123 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4bj76" Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.704851 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-4bj76" Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.705406 4757 generic.go:334] "Generic (PLEG): container finished" podID="865d1515-2b66-4b6e-b670-d01e37c88cac" containerID="19dcdcd53bafcdaf93e3522701a38e788e0e130419c2c8782728b6da309b36b1" exitCode=0 Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.705486 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2cq2s" event={"ID":"865d1515-2b66-4b6e-b670-d01e37c88cac","Type":"ContainerDied","Data":"19dcdcd53bafcdaf93e3522701a38e788e0e130419c2c8782728b6da309b36b1"} Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.705516 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2cq2s" event={"ID":"865d1515-2b66-4b6e-b670-d01e37c88cac","Type":"ContainerStarted","Data":"385c215096dcbbf697aeb0e23d67099a946a09af1b78987e117a846b56559a39"} Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.707293 4757 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.744939 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-4bj76" podStartSLOduration=3.744914498 podStartE2EDuration="3.744914498s" podCreationTimestamp="2026-01-29 15:18:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:18:27.725072085 +0000 UTC m=+471.014322322" watchObservedRunningTime="2026-01-29 15:18:27.744914498 +0000 UTC m=+471.034164735" Jan 29 15:18:27 crc kubenswrapper[4757]: I0129 15:18:27.843973 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7cmzn" Jan 29 15:18:28 crc kubenswrapper[4757]: I0129 15:18:28.240756 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7cmzn"] Jan 29 15:18:28 crc kubenswrapper[4757]: W0129 15:18:28.248915 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12bc2599_1396_4296_b78a_d37850977495.slice/crio-92403c985ab6e64b96c990967fbe89788c38faaf71a53f99a4fce0d6f6fb0cd4 WatchSource:0}: Error finding container 92403c985ab6e64b96c990967fbe89788c38faaf71a53f99a4fce0d6f6fb0cd4: Status 404 returned error can't find the container with id 92403c985ab6e64b96c990967fbe89788c38faaf71a53f99a4fce0d6f6fb0cd4 Jan 29 15:18:28 crc kubenswrapper[4757]: I0129 15:18:28.713541 4757 generic.go:334] "Generic (PLEG): container finished" podID="12bc2599-1396-4296-b78a-d37850977495" containerID="c78dff094d078682a720f444658596cd4e33fbe09e2d6ca3caef58bc2c8b79a1" exitCode=0 Jan 29 15:18:28 crc kubenswrapper[4757]: I0129 15:18:28.713609 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cmzn" event={"ID":"12bc2599-1396-4296-b78a-d37850977495","Type":"ContainerDied","Data":"c78dff094d078682a720f444658596cd4e33fbe09e2d6ca3caef58bc2c8b79a1"} Jan 29 15:18:28 crc kubenswrapper[4757]: I0129 15:18:28.713935 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cmzn" event={"ID":"12bc2599-1396-4296-b78a-d37850977495","Type":"ContainerStarted","Data":"92403c985ab6e64b96c990967fbe89788c38faaf71a53f99a4fce0d6f6fb0cd4"} Jan 29 15:18:28 crc kubenswrapper[4757]: I0129 15:18:28.716259 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2cq2s" event={"ID":"865d1515-2b66-4b6e-b670-d01e37c88cac","Type":"ContainerStarted","Data":"c732ed8b57a576c5a6ce54fa91f81c9124266191265a760622b883bd4f46e81d"} Jan 29 15:18:28 crc kubenswrapper[4757]: I0129 15:18:28.920217 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6prjx"] Jan 29 15:18:28 crc kubenswrapper[4757]: I0129 15:18:28.921617 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6prjx" Jan 29 15:18:28 crc kubenswrapper[4757]: I0129 15:18:28.928694 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 15:18:28 crc kubenswrapper[4757]: I0129 15:18:28.932652 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6prjx"] Jan 29 15:18:28 crc kubenswrapper[4757]: I0129 15:18:28.982058 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b41b6b-3fb0-4a49-9ca5-d220053e2aa3-catalog-content\") pod \"redhat-operators-6prjx\" (UID: \"96b41b6b-3fb0-4a49-9ca5-d220053e2aa3\") " pod="openshift-marketplace/redhat-operators-6prjx" Jan 29 15:18:28 crc kubenswrapper[4757]: I0129 15:18:28.982109 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpmlv\" (UniqueName: \"kubernetes.io/projected/96b41b6b-3fb0-4a49-9ca5-d220053e2aa3-kube-api-access-gpmlv\") pod \"redhat-operators-6prjx\" (UID: \"96b41b6b-3fb0-4a49-9ca5-d220053e2aa3\") " pod="openshift-marketplace/redhat-operators-6prjx" Jan 29 15:18:28 crc kubenswrapper[4757]: I0129 15:18:28.982224 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b41b6b-3fb0-4a49-9ca5-d220053e2aa3-utilities\") pod \"redhat-operators-6prjx\" (UID: \"96b41b6b-3fb0-4a49-9ca5-d220053e2aa3\") " pod="openshift-marketplace/redhat-operators-6prjx" Jan 29 15:18:29 crc kubenswrapper[4757]: I0129 15:18:29.083244 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b41b6b-3fb0-4a49-9ca5-d220053e2aa3-catalog-content\") pod \"redhat-operators-6prjx\" (UID: \"96b41b6b-3fb0-4a49-9ca5-d220053e2aa3\") " pod="openshift-marketplace/redhat-operators-6prjx" Jan 29 15:18:29 crc kubenswrapper[4757]: I0129 15:18:29.083313 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpmlv\" (UniqueName: \"kubernetes.io/projected/96b41b6b-3fb0-4a49-9ca5-d220053e2aa3-kube-api-access-gpmlv\") pod \"redhat-operators-6prjx\" (UID: \"96b41b6b-3fb0-4a49-9ca5-d220053e2aa3\") " pod="openshift-marketplace/redhat-operators-6prjx" Jan 29 15:18:29 crc kubenswrapper[4757]: I0129 15:18:29.083371 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b41b6b-3fb0-4a49-9ca5-d220053e2aa3-utilities\") pod \"redhat-operators-6prjx\" (UID: \"96b41b6b-3fb0-4a49-9ca5-d220053e2aa3\") " pod="openshift-marketplace/redhat-operators-6prjx" Jan 29 15:18:29 crc kubenswrapper[4757]: I0129 15:18:29.083967 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96b41b6b-3fb0-4a49-9ca5-d220053e2aa3-utilities\") pod \"redhat-operators-6prjx\" (UID: \"96b41b6b-3fb0-4a49-9ca5-d220053e2aa3\") " pod="openshift-marketplace/redhat-operators-6prjx" Jan 29 15:18:29 crc kubenswrapper[4757]: I0129 15:18:29.084743 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96b41b6b-3fb0-4a49-9ca5-d220053e2aa3-catalog-content\") pod \"redhat-operators-6prjx\" (UID: \"96b41b6b-3fb0-4a49-9ca5-d220053e2aa3\") " 
pod="openshift-marketplace/redhat-operators-6prjx" Jan 29 15:18:29 crc kubenswrapper[4757]: I0129 15:18:29.103020 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpmlv\" (UniqueName: \"kubernetes.io/projected/96b41b6b-3fb0-4a49-9ca5-d220053e2aa3-kube-api-access-gpmlv\") pod \"redhat-operators-6prjx\" (UID: \"96b41b6b-3fb0-4a49-9ca5-d220053e2aa3\") " pod="openshift-marketplace/redhat-operators-6prjx" Jan 29 15:18:29 crc kubenswrapper[4757]: I0129 15:18:29.235807 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6prjx" Jan 29 15:18:29 crc kubenswrapper[4757]: I0129 15:18:29.650690 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6prjx"] Jan 29 15:18:29 crc kubenswrapper[4757]: I0129 15:18:29.723062 4757 generic.go:334] "Generic (PLEG): container finished" podID="865d1515-2b66-4b6e-b670-d01e37c88cac" containerID="c732ed8b57a576c5a6ce54fa91f81c9124266191265a760622b883bd4f46e81d" exitCode=0 Jan 29 15:18:29 crc kubenswrapper[4757]: I0129 15:18:29.723318 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2cq2s" event={"ID":"865d1515-2b66-4b6e-b670-d01e37c88cac","Type":"ContainerDied","Data":"c732ed8b57a576c5a6ce54fa91f81c9124266191265a760622b883bd4f46e81d"} Jan 29 15:18:29 crc kubenswrapper[4757]: I0129 15:18:29.724992 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6prjx" event={"ID":"96b41b6b-3fb0-4a49-9ca5-d220053e2aa3","Type":"ContainerStarted","Data":"cdd64afdbf6dbfb1a1d703bf6962e8ec4dbb98d71a99002a714e2687bb5723cf"} Jan 29 15:18:29 crc kubenswrapper[4757]: I0129 15:18:29.732533 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cmzn" event={"ID":"12bc2599-1396-4296-b78a-d37850977495","Type":"ContainerStarted","Data":"46e5cd3c1c964c3c72c315a44c2af79eec0a6d951e5b329551919a80484e9bf1"} Jan 29 15:18:29 crc kubenswrapper[4757]: I0129 15:18:29.924217 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-slm8x"] Jan 29 15:18:29 crc kubenswrapper[4757]: I0129 15:18:29.925644 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-slm8x" Jan 29 15:18:29 crc kubenswrapper[4757]: I0129 15:18:29.927737 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 15:18:29 crc kubenswrapper[4757]: I0129 15:18:29.945098 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-slm8x"] Jan 29 15:18:29 crc kubenswrapper[4757]: I0129 15:18:29.995110 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxmw2\" (UniqueName: \"kubernetes.io/projected/4f737994-cd39-4543-ab57-9591a9322823-kube-api-access-bxmw2\") pod \"community-operators-slm8x\" (UID: \"4f737994-cd39-4543-ab57-9591a9322823\") " pod="openshift-marketplace/community-operators-slm8x" Jan 29 15:18:29 crc kubenswrapper[4757]: I0129 15:18:29.995214 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f737994-cd39-4543-ab57-9591a9322823-catalog-content\") pod \"community-operators-slm8x\" (UID: \"4f737994-cd39-4543-ab57-9591a9322823\") " pod="openshift-marketplace/community-operators-slm8x" Jan 29 15:18:29 crc kubenswrapper[4757]: I0129 15:18:29.995325 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f737994-cd39-4543-ab57-9591a9322823-utilities\") pod \"community-operators-slm8x\" (UID: \"4f737994-cd39-4543-ab57-9591a9322823\") " pod="openshift-marketplace/community-operators-slm8x" Jan 29 15:18:30 crc kubenswrapper[4757]: I0129 15:18:30.096117 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f737994-cd39-4543-ab57-9591a9322823-utilities\") pod \"community-operators-slm8x\" (UID: \"4f737994-cd39-4543-ab57-9591a9322823\") " pod="openshift-marketplace/community-operators-slm8x" Jan 29 15:18:30 crc kubenswrapper[4757]: I0129 15:18:30.096864 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxmw2\" (UniqueName: \"kubernetes.io/projected/4f737994-cd39-4543-ab57-9591a9322823-kube-api-access-bxmw2\") pod \"community-operators-slm8x\" (UID: \"4f737994-cd39-4543-ab57-9591a9322823\") " pod="openshift-marketplace/community-operators-slm8x" Jan 29 15:18:30 crc kubenswrapper[4757]: I0129 15:18:30.096776 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f737994-cd39-4543-ab57-9591a9322823-utilities\") pod \"community-operators-slm8x\" (UID: \"4f737994-cd39-4543-ab57-9591a9322823\") " pod="openshift-marketplace/community-operators-slm8x" Jan 29 15:18:30 crc kubenswrapper[4757]: I0129 15:18:30.096980 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f737994-cd39-4543-ab57-9591a9322823-catalog-content\") pod \"community-operators-slm8x\" (UID: \"4f737994-cd39-4543-ab57-9591a9322823\") " pod="openshift-marketplace/community-operators-slm8x" Jan 29 15:18:30 crc kubenswrapper[4757]: I0129 15:18:30.097342 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f737994-cd39-4543-ab57-9591a9322823-catalog-content\") pod \"community-operators-slm8x\" (UID: 
\"4f737994-cd39-4543-ab57-9591a9322823\") " pod="openshift-marketplace/community-operators-slm8x" Jan 29 15:18:30 crc kubenswrapper[4757]: I0129 15:18:30.116624 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxmw2\" (UniqueName: \"kubernetes.io/projected/4f737994-cd39-4543-ab57-9591a9322823-kube-api-access-bxmw2\") pod \"community-operators-slm8x\" (UID: \"4f737994-cd39-4543-ab57-9591a9322823\") " pod="openshift-marketplace/community-operators-slm8x" Jan 29 15:18:30 crc kubenswrapper[4757]: I0129 15:18:30.251533 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-slm8x" Jan 29 15:18:30 crc kubenswrapper[4757]: I0129 15:18:30.657504 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-slm8x"] Jan 29 15:18:30 crc kubenswrapper[4757]: I0129 15:18:30.739605 4757 generic.go:334] "Generic (PLEG): container finished" podID="96b41b6b-3fb0-4a49-9ca5-d220053e2aa3" containerID="a6a1978a85cb34f0dc2b38d6c000af3cb55a25aae707852990963a842092318a" exitCode=0 Jan 29 15:18:30 crc kubenswrapper[4757]: I0129 15:18:30.740474 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6prjx" event={"ID":"96b41b6b-3fb0-4a49-9ca5-d220053e2aa3","Type":"ContainerDied","Data":"a6a1978a85cb34f0dc2b38d6c000af3cb55a25aae707852990963a842092318a"} Jan 29 15:18:30 crc kubenswrapper[4757]: I0129 15:18:30.745182 4757 generic.go:334] "Generic (PLEG): container finished" podID="12bc2599-1396-4296-b78a-d37850977495" containerID="46e5cd3c1c964c3c72c315a44c2af79eec0a6d951e5b329551919a80484e9bf1" exitCode=0 Jan 29 15:18:30 crc kubenswrapper[4757]: I0129 15:18:30.745314 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cmzn" event={"ID":"12bc2599-1396-4296-b78a-d37850977495","Type":"ContainerDied","Data":"46e5cd3c1c964c3c72c315a44c2af79eec0a6d951e5b329551919a80484e9bf1"} Jan 29 15:18:30 crc kubenswrapper[4757]: I0129 15:18:30.749477 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-slm8x" event={"ID":"4f737994-cd39-4543-ab57-9591a9322823","Type":"ContainerStarted","Data":"0bacaedffeef08f17c34d1dc8104915439c19ee0d6d696b49132331d689f757e"} Jan 29 15:18:30 crc kubenswrapper[4757]: I0129 15:18:30.753594 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2cq2s" event={"ID":"865d1515-2b66-4b6e-b670-d01e37c88cac","Type":"ContainerStarted","Data":"eb0778ff6f737f0c9735d7c6dc725ae32ee4ace15ac42a109b2a386ba120ff94"} Jan 29 15:18:30 crc kubenswrapper[4757]: I0129 15:18:30.802902 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2cq2s" podStartSLOduration=2.145451779 podStartE2EDuration="4.802886676s" podCreationTimestamp="2026-01-29 15:18:26 +0000 UTC" firstStartedPulling="2026-01-29 15:18:27.707066957 +0000 UTC m=+470.996317194" lastFinishedPulling="2026-01-29 15:18:30.364501844 +0000 UTC m=+473.653752091" observedRunningTime="2026-01-29 15:18:30.799491477 +0000 UTC m=+474.088741704" watchObservedRunningTime="2026-01-29 15:18:30.802886676 +0000 UTC m=+474.092136913" Jan 29 15:18:31 crc kubenswrapper[4757]: I0129 15:18:31.760672 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cmzn" 
event={"ID":"12bc2599-1396-4296-b78a-d37850977495","Type":"ContainerStarted","Data":"bd26b507df5e6ce2e7eac1767fa3d23aa2399839cdbc99e33cd212a749535a44"} Jan 29 15:18:31 crc kubenswrapper[4757]: I0129 15:18:31.763040 4757 generic.go:334] "Generic (PLEG): container finished" podID="4f737994-cd39-4543-ab57-9591a9322823" containerID="cd8aee11e29bbd85b6b628084f2af63745e0e793bf80b8b59224548acbfda563" exitCode=0 Jan 29 15:18:31 crc kubenswrapper[4757]: I0129 15:18:31.763087 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-slm8x" event={"ID":"4f737994-cd39-4543-ab57-9591a9322823","Type":"ContainerDied","Data":"cd8aee11e29bbd85b6b628084f2af63745e0e793bf80b8b59224548acbfda563"} Jan 29 15:18:31 crc kubenswrapper[4757]: I0129 15:18:31.766894 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6prjx" event={"ID":"96b41b6b-3fb0-4a49-9ca5-d220053e2aa3","Type":"ContainerStarted","Data":"740b1d506c5ac995cdfab2f9832667854aab099bbda27f3f114954c07b00cb26"} Jan 29 15:18:31 crc kubenswrapper[4757]: I0129 15:18:31.792628 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7cmzn" podStartSLOduration=2.305649259 podStartE2EDuration="4.792613194s" podCreationTimestamp="2026-01-29 15:18:27 +0000 UTC" firstStartedPulling="2026-01-29 15:18:28.714872076 +0000 UTC m=+472.004122303" lastFinishedPulling="2026-01-29 15:18:31.201836001 +0000 UTC m=+474.491086238" observedRunningTime="2026-01-29 15:18:31.791136151 +0000 UTC m=+475.080386388" watchObservedRunningTime="2026-01-29 15:18:31.792613194 +0000 UTC m=+475.081863431" Jan 29 15:18:32 crc kubenswrapper[4757]: E0129 15:18:32.194013 4757 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96b41b6b_3fb0_4a49_9ca5_d220053e2aa3.slice/crio-740b1d506c5ac995cdfab2f9832667854aab099bbda27f3f114954c07b00cb26.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96b41b6b_3fb0_4a49_9ca5_d220053e2aa3.slice/crio-conmon-740b1d506c5ac995cdfab2f9832667854aab099bbda27f3f114954c07b00cb26.scope\": RecentStats: unable to find data in memory cache]" Jan 29 15:18:32 crc kubenswrapper[4757]: I0129 15:18:32.773045 4757 generic.go:334] "Generic (PLEG): container finished" podID="4f737994-cd39-4543-ab57-9591a9322823" containerID="f76f6ce83129eef19208f7855142e17047c251194c7df17714d1e8f03e5b60ef" exitCode=0 Jan 29 15:18:32 crc kubenswrapper[4757]: I0129 15:18:32.773111 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-slm8x" event={"ID":"4f737994-cd39-4543-ab57-9591a9322823","Type":"ContainerDied","Data":"f76f6ce83129eef19208f7855142e17047c251194c7df17714d1e8f03e5b60ef"} Jan 29 15:18:32 crc kubenswrapper[4757]: I0129 15:18:32.780137 4757 generic.go:334] "Generic (PLEG): container finished" podID="96b41b6b-3fb0-4a49-9ca5-d220053e2aa3" containerID="740b1d506c5ac995cdfab2f9832667854aab099bbda27f3f114954c07b00cb26" exitCode=0 Jan 29 15:18:32 crc kubenswrapper[4757]: I0129 15:18:32.780184 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6prjx" event={"ID":"96b41b6b-3fb0-4a49-9ca5-d220053e2aa3","Type":"ContainerDied","Data":"740b1d506c5ac995cdfab2f9832667854aab099bbda27f3f114954c07b00cb26"} Jan 29 15:18:33 crc kubenswrapper[4757]: I0129 15:18:33.786435 
4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-slm8x" event={"ID":"4f737994-cd39-4543-ab57-9591a9322823","Type":"ContainerStarted","Data":"a69cf7731ac6edf2f053e4b0472b1522780d311e791a0c8511ead6b957f43a47"} Jan 29 15:18:33 crc kubenswrapper[4757]: I0129 15:18:33.788649 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6prjx" event={"ID":"96b41b6b-3fb0-4a49-9ca5-d220053e2aa3","Type":"ContainerStarted","Data":"e43f9c194a78c6252f8bd2d68154612b0b28e7bb243c73a7ccbd4bd3e73440cd"} Jan 29 15:18:33 crc kubenswrapper[4757]: I0129 15:18:33.801275 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-slm8x" podStartSLOduration=3.398165491 podStartE2EDuration="4.801249066s" podCreationTimestamp="2026-01-29 15:18:29 +0000 UTC" firstStartedPulling="2026-01-29 15:18:31.764602093 +0000 UTC m=+475.053852330" lastFinishedPulling="2026-01-29 15:18:33.167685668 +0000 UTC m=+476.456935905" observedRunningTime="2026-01-29 15:18:33.800083552 +0000 UTC m=+477.089333789" watchObservedRunningTime="2026-01-29 15:18:33.801249066 +0000 UTC m=+477.090499303" Jan 29 15:18:33 crc kubenswrapper[4757]: I0129 15:18:33.826706 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6prjx" podStartSLOduration=3.131130236 podStartE2EDuration="5.826689192s" podCreationTimestamp="2026-01-29 15:18:28 +0000 UTC" firstStartedPulling="2026-01-29 15:18:30.741812744 +0000 UTC m=+474.031062981" lastFinishedPulling="2026-01-29 15:18:33.4373717 +0000 UTC m=+476.726621937" observedRunningTime="2026-01-29 15:18:33.825861798 +0000 UTC m=+477.115112035" watchObservedRunningTime="2026-01-29 15:18:33.826689192 +0000 UTC m=+477.115939429" Jan 29 15:18:36 crc kubenswrapper[4757]: I0129 15:18:36.838657 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2cq2s" Jan 29 15:18:36 crc kubenswrapper[4757]: I0129 15:18:36.838937 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2cq2s" Jan 29 15:18:36 crc kubenswrapper[4757]: I0129 15:18:36.876944 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2cq2s" Jan 29 15:18:37 crc kubenswrapper[4757]: I0129 15:18:37.844306 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7cmzn" Jan 29 15:18:37 crc kubenswrapper[4757]: I0129 15:18:37.845339 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7cmzn" Jan 29 15:18:37 crc kubenswrapper[4757]: I0129 15:18:37.849062 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2cq2s" Jan 29 15:18:37 crc kubenswrapper[4757]: I0129 15:18:37.902414 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7cmzn" Jan 29 15:18:38 crc kubenswrapper[4757]: I0129 15:18:38.859909 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7cmzn" Jan 29 15:18:39 crc kubenswrapper[4757]: I0129 15:18:39.236871 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6prjx" Jan 29 
Jan 29 15:18:40 crc kubenswrapper[4757]: I0129 15:18:40.252572 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-slm8x"
Jan 29 15:18:40 crc kubenswrapper[4757]: I0129 15:18:40.253850 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-slm8x"
Jan 29 15:18:40 crc kubenswrapper[4757]: I0129 15:18:40.291345 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6prjx" podUID="96b41b6b-3fb0-4a49-9ca5-d220053e2aa3" containerName="registry-server" probeResult="failure" output=<
Jan 29 15:18:40 crc kubenswrapper[4757]: timeout: failed to connect service ":50051" within 1s
Jan 29 15:18:40 crc kubenswrapper[4757]: >
Jan 29 15:18:40 crc kubenswrapper[4757]: I0129 15:18:40.294158 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-slm8x"
Jan 29 15:18:40 crc kubenswrapper[4757]: I0129 15:18:40.856540 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-slm8x"
Jan 29 15:18:49 crc kubenswrapper[4757]: I0129 15:18:49.294145 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6prjx"
Jan 29 15:18:49 crc kubenswrapper[4757]: I0129 15:18:49.335965 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6prjx"
Jan 29 15:20:17 crc kubenswrapper[4757]: I0129 15:20:17.605076 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 15:20:17 crc kubenswrapper[4757]: I0129 15:20:17.606439 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 15:20:47 crc kubenswrapper[4757]: I0129 15:20:47.605675 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 15:20:47 crc kubenswrapper[4757]: I0129 15:20:47.606243 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 15:21:17 crc kubenswrapper[4757]: I0129 15:21:17.605137 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 15:21:17 crc kubenswrapper[4757]: I0129 15:21:17.605700 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 15:21:17 crc kubenswrapper[4757]: I0129 15:21:17.605748 4757 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-45q8t"
Jan 29 15:21:17 crc kubenswrapper[4757]: I0129 15:21:17.606390 4757 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6098b7f5130ded36e34d2b58124793f458af5b996fce28a164fa5b8bbd1a2dbd"} pod="openshift-machine-config-operator/machine-config-daemon-45q8t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 15:21:17 crc kubenswrapper[4757]: I0129 15:21:17.606452 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" containerID="cri-o://6098b7f5130ded36e34d2b58124793f458af5b996fce28a164fa5b8bbd1a2dbd" gracePeriod=600
Jan 29 15:21:17 crc kubenswrapper[4757]: I0129 15:21:17.897539 4757 generic.go:334] "Generic (PLEG): container finished" podID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerID="6098b7f5130ded36e34d2b58124793f458af5b996fce28a164fa5b8bbd1a2dbd" exitCode=0
Jan 29 15:21:17 crc kubenswrapper[4757]: I0129 15:21:17.897585 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerDied","Data":"6098b7f5130ded36e34d2b58124793f458af5b996fce28a164fa5b8bbd1a2dbd"}
Jan 29 15:21:17 crc kubenswrapper[4757]: I0129 15:21:17.897622 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerStarted","Data":"174440c846854cb49768c9b08f3011bcfb796de0989f3816b5db8245b48df983"}
Jan 29 15:21:17 crc kubenswrapper[4757]: I0129 15:21:17.897643 4757 scope.go:117] "RemoveContainer" containerID="989f6c946474d5c13e79a0e6cd5a831a42488fc707f84bbd376773aebb6df314"
Jan 29 15:21:21 crc kubenswrapper[4757]: I0129 15:21:21.030794 4757 scope.go:117] "RemoveContainer" containerID="9fe6fd9260cc4c532c528ffd70cf74beff48b61dbd2f1a53ef74ca7d0ac89e1d"
Jan 29 15:22:21 crc kubenswrapper[4757]: I0129 15:22:21.068179 4757 scope.go:117] "RemoveContainer" containerID="d542e04c40e7d0b32e9e711cd380167b06168e58c35f423be4af5c3c62e85e20"
Jan 29 15:22:21 crc kubenswrapper[4757]: I0129 15:22:21.083613 4757 scope.go:117] "RemoveContainer" containerID="412210532c444f47f8c06c84f5caf48ca48302b78cd3249b5e2656d1ef2329d1"
Jan 29 15:23:16 crc kubenswrapper[4757]: I0129 15:23:16.474050 4757 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 29 15:23:17 crc kubenswrapper[4757]: I0129 15:23:17.604824 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:23:17 crc kubenswrapper[4757]: I0129 15:23:17.605618 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:23:47 crc kubenswrapper[4757]: I0129 15:23:47.605358 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:23:47 crc kubenswrapper[4757]: I0129 15:23:47.605921 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:24:17 crc kubenswrapper[4757]: I0129 15:24:17.604927 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:24:17 crc kubenswrapper[4757]: I0129 15:24:17.605504 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:24:17 crc kubenswrapper[4757]: I0129 15:24:17.605551 4757 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" Jan 29 15:24:17 crc kubenswrapper[4757]: I0129 15:24:17.606121 4757 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"174440c846854cb49768c9b08f3011bcfb796de0989f3816b5db8245b48df983"} pod="openshift-machine-config-operator/machine-config-daemon-45q8t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:24:17 crc kubenswrapper[4757]: I0129 15:24:17.606172 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" containerID="cri-o://174440c846854cb49768c9b08f3011bcfb796de0989f3816b5db8245b48df983" gracePeriod=600 Jan 29 15:24:17 crc kubenswrapper[4757]: I0129 15:24:17.929437 4757 generic.go:334] "Generic (PLEG): container finished" podID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerID="174440c846854cb49768c9b08f3011bcfb796de0989f3816b5db8245b48df983" exitCode=0 Jan 29 15:24:17 crc kubenswrapper[4757]: I0129 15:24:17.929516 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" 
event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerDied","Data":"174440c846854cb49768c9b08f3011bcfb796de0989f3816b5db8245b48df983"} Jan 29 15:24:17 crc kubenswrapper[4757]: I0129 15:24:17.930057 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerStarted","Data":"42c07bced4a4cc16b1156866e5aecd360caa654a1d5d2eaf998a21db73871643"} Jan 29 15:24:17 crc kubenswrapper[4757]: I0129 15:24:17.930126 4757 scope.go:117] "RemoveContainer" containerID="6098b7f5130ded36e34d2b58124793f458af5b996fce28a164fa5b8bbd1a2dbd" Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.191507 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-wvl6f"] Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.192977 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wvl6f" Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.193654 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67584\" (UniqueName: \"kubernetes.io/projected/c460cf1a-344b-4096-b5c7-187f4083d2c1-kube-api-access-67584\") pod \"cert-manager-cainjector-cf98fcc89-wvl6f\" (UID: \"c460cf1a-344b-4096-b5c7-187f4083d2c1\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-wvl6f" Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.204787 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-pdsxk"] Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.205418 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-pdsxk" Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.206455 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.206584 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.206665 4757 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-dtrsk" Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.209127 4757 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-lzhxt" Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.219710 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-wvl6f"] Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.241190 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-pdsxk"] Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.250426 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-bljgl"] Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.251220 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-bljgl" Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.253492 4757 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-gw6kd" Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.276198 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-bljgl"] Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.296815 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67584\" (UniqueName: \"kubernetes.io/projected/c460cf1a-344b-4096-b5c7-187f4083d2c1-kube-api-access-67584\") pod \"cert-manager-cainjector-cf98fcc89-wvl6f\" (UID: \"c460cf1a-344b-4096-b5c7-187f4083d2c1\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-wvl6f" Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.317252 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67584\" (UniqueName: \"kubernetes.io/projected/c460cf1a-344b-4096-b5c7-187f4083d2c1-kube-api-access-67584\") pod \"cert-manager-cainjector-cf98fcc89-wvl6f\" (UID: \"c460cf1a-344b-4096-b5c7-187f4083d2c1\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-wvl6f" Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.398372 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pkrj\" (UniqueName: \"kubernetes.io/projected/b8c4b13b-d870-4731-a95f-c0a3b7d1f896-kube-api-access-8pkrj\") pod \"cert-manager-858654f9db-pdsxk\" (UID: \"b8c4b13b-d870-4731-a95f-c0a3b7d1f896\") " pod="cert-manager/cert-manager-858654f9db-pdsxk" Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.398431 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sbzm\" (UniqueName: \"kubernetes.io/projected/a4dc59f9-ba72-46db-be8b-f83bf7c99b8a-kube-api-access-2sbzm\") pod \"cert-manager-webhook-687f57d79b-bljgl\" (UID: \"a4dc59f9-ba72-46db-be8b-f83bf7c99b8a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-bljgl" Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.499385 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sbzm\" (UniqueName: \"kubernetes.io/projected/a4dc59f9-ba72-46db-be8b-f83bf7c99b8a-kube-api-access-2sbzm\") pod \"cert-manager-webhook-687f57d79b-bljgl\" (UID: \"a4dc59f9-ba72-46db-be8b-f83bf7c99b8a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-bljgl" Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.499506 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pkrj\" (UniqueName: \"kubernetes.io/projected/b8c4b13b-d870-4731-a95f-c0a3b7d1f896-kube-api-access-8pkrj\") pod \"cert-manager-858654f9db-pdsxk\" (UID: \"b8c4b13b-d870-4731-a95f-c0a3b7d1f896\") " pod="cert-manager/cert-manager-858654f9db-pdsxk" Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.507262 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wvl6f" Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.516104 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sbzm\" (UniqueName: \"kubernetes.io/projected/a4dc59f9-ba72-46db-be8b-f83bf7c99b8a-kube-api-access-2sbzm\") pod \"cert-manager-webhook-687f57d79b-bljgl\" (UID: \"a4dc59f9-ba72-46db-be8b-f83bf7c99b8a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-bljgl" Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.519252 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pkrj\" (UniqueName: \"kubernetes.io/projected/b8c4b13b-d870-4731-a95f-c0a3b7d1f896-kube-api-access-8pkrj\") pod \"cert-manager-858654f9db-pdsxk\" (UID: \"b8c4b13b-d870-4731-a95f-c0a3b7d1f896\") " pod="cert-manager/cert-manager-858654f9db-pdsxk" Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.520590 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-pdsxk" Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.567340 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-bljgl" Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.724915 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-wvl6f"] Jan 29 15:25:47 crc kubenswrapper[4757]: W0129 15:25:47.743442 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc460cf1a_344b_4096_b5c7_187f4083d2c1.slice/crio-ee2e040b4cd05e5ea9c6c9401f12e6790adb8a80f8377259fcedc9241f92bae6 WatchSource:0}: Error finding container ee2e040b4cd05e5ea9c6c9401f12e6790adb8a80f8377259fcedc9241f92bae6: Status 404 returned error can't find the container with id ee2e040b4cd05e5ea9c6c9401f12e6790adb8a80f8377259fcedc9241f92bae6 Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.753714 4757 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 15:25:47 crc kubenswrapper[4757]: I0129 15:25:47.794361 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-pdsxk"] Jan 29 15:25:47 crc kubenswrapper[4757]: W0129 15:25:47.797517 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8c4b13b_d870_4731_a95f_c0a3b7d1f896.slice/crio-27e0051b0a16088975a0e4064da8b4072bd1e9e831da7c14d258afcb88f93f2f WatchSource:0}: Error finding container 27e0051b0a16088975a0e4064da8b4072bd1e9e831da7c14d258afcb88f93f2f: Status 404 returned error can't find the container with id 27e0051b0a16088975a0e4064da8b4072bd1e9e831da7c14d258afcb88f93f2f Jan 29 15:25:48 crc kubenswrapper[4757]: I0129 15:25:48.060085 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-bljgl"] Jan 29 15:25:48 crc kubenswrapper[4757]: W0129 15:25:48.064131 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4dc59f9_ba72_46db_be8b_f83bf7c99b8a.slice/crio-5507ac23d16e9d5ce8f27003562c85a1a970312c3d58fb655d6ecc8aecafacf2 WatchSource:0}: Error finding container 5507ac23d16e9d5ce8f27003562c85a1a970312c3d58fb655d6ecc8aecafacf2: Status 404 returned error can't find the container with id 
5507ac23d16e9d5ce8f27003562c85a1a970312c3d58fb655d6ecc8aecafacf2 Jan 29 15:25:48 crc kubenswrapper[4757]: I0129 15:25:48.450736 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wvl6f" event={"ID":"c460cf1a-344b-4096-b5c7-187f4083d2c1","Type":"ContainerStarted","Data":"ee2e040b4cd05e5ea9c6c9401f12e6790adb8a80f8377259fcedc9241f92bae6"} Jan 29 15:25:48 crc kubenswrapper[4757]: I0129 15:25:48.452197 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-bljgl" event={"ID":"a4dc59f9-ba72-46db-be8b-f83bf7c99b8a","Type":"ContainerStarted","Data":"5507ac23d16e9d5ce8f27003562c85a1a970312c3d58fb655d6ecc8aecafacf2"} Jan 29 15:25:48 crc kubenswrapper[4757]: I0129 15:25:48.453211 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-pdsxk" event={"ID":"b8c4b13b-d870-4731-a95f-c0a3b7d1f896","Type":"ContainerStarted","Data":"27e0051b0a16088975a0e4064da8b4072bd1e9e831da7c14d258afcb88f93f2f"} Jan 29 15:25:52 crc kubenswrapper[4757]: I0129 15:25:52.475724 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wvl6f" event={"ID":"c460cf1a-344b-4096-b5c7-187f4083d2c1","Type":"ContainerStarted","Data":"a27a6cb8d9112086b307396c435feab3b7c15a1cf1d0f0cdcba127eb7472e13a"} Jan 29 15:25:52 crc kubenswrapper[4757]: I0129 15:25:52.477851 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-bljgl" event={"ID":"a4dc59f9-ba72-46db-be8b-f83bf7c99b8a","Type":"ContainerStarted","Data":"a343addad39a9ee3b4b2e7e91b34894dfdab9cc13b5a22d97ae14dae13dde928"} Jan 29 15:25:52 crc kubenswrapper[4757]: I0129 15:25:52.477956 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-bljgl" Jan 29 15:25:52 crc kubenswrapper[4757]: I0129 15:25:52.479672 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-pdsxk" event={"ID":"b8c4b13b-d870-4731-a95f-c0a3b7d1f896","Type":"ContainerStarted","Data":"178f98b47cc38a7143a105f0e4f7cbbcb7debbd99c058c4054727d5fe31e6915"} Jan 29 15:25:52 crc kubenswrapper[4757]: I0129 15:25:52.500379 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-wvl6f" podStartSLOduration=1.775018768 podStartE2EDuration="5.500351057s" podCreationTimestamp="2026-01-29 15:25:47 +0000 UTC" firstStartedPulling="2026-01-29 15:25:47.753523374 +0000 UTC m=+911.042773611" lastFinishedPulling="2026-01-29 15:25:51.478855663 +0000 UTC m=+914.768105900" observedRunningTime="2026-01-29 15:25:52.498677008 +0000 UTC m=+915.787927255" watchObservedRunningTime="2026-01-29 15:25:52.500351057 +0000 UTC m=+915.789601304" Jan 29 15:25:52 crc kubenswrapper[4757]: I0129 15:25:52.531732 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-pdsxk" podStartSLOduration=1.712795192 podStartE2EDuration="5.531709132s" podCreationTimestamp="2026-01-29 15:25:47 +0000 UTC" firstStartedPulling="2026-01-29 15:25:47.799365902 +0000 UTC m=+911.088616139" lastFinishedPulling="2026-01-29 15:25:51.618279842 +0000 UTC m=+914.907530079" observedRunningTime="2026-01-29 15:25:52.526758178 +0000 UTC m=+915.816008435" watchObservedRunningTime="2026-01-29 15:25:52.531709132 +0000 UTC m=+915.820959379" Jan 29 15:25:52 crc kubenswrapper[4757]: I0129 15:25:52.556818 4757 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-bljgl" podStartSLOduration=2.057105941 podStartE2EDuration="5.556795344s" podCreationTimestamp="2026-01-29 15:25:47 +0000 UTC" firstStartedPulling="2026-01-29 15:25:48.065757267 +0000 UTC m=+911.355007504" lastFinishedPulling="2026-01-29 15:25:51.56544665 +0000 UTC m=+914.854696907" observedRunningTime="2026-01-29 15:25:52.553821468 +0000 UTC m=+915.843071735" watchObservedRunningTime="2026-01-29 15:25:52.556795344 +0000 UTC m=+915.846045581" Jan 29 15:25:56 crc kubenswrapper[4757]: I0129 15:25:56.641871 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8fwvd"] Jan 29 15:25:56 crc kubenswrapper[4757]: I0129 15:25:56.642602 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="ovn-controller" containerID="cri-o://253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa" gracePeriod=30 Jan 29 15:25:56 crc kubenswrapper[4757]: I0129 15:25:56.642995 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="sbdb" containerID="cri-o://845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9" gracePeriod=30 Jan 29 15:25:56 crc kubenswrapper[4757]: I0129 15:25:56.643040 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="nbdb" containerID="cri-o://bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1" gracePeriod=30 Jan 29 15:25:56 crc kubenswrapper[4757]: I0129 15:25:56.643079 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="northd" containerID="cri-o://c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02" gracePeriod=30 Jan 29 15:25:56 crc kubenswrapper[4757]: I0129 15:25:56.643120 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323" gracePeriod=30 Jan 29 15:25:56 crc kubenswrapper[4757]: I0129 15:25:56.643157 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="kube-rbac-proxy-node" containerID="cri-o://e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407" gracePeriod=30 Jan 29 15:25:56 crc kubenswrapper[4757]: I0129 15:25:56.643196 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="ovn-acl-logging" containerID="cri-o://4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd" gracePeriod=30 Jan 29 15:25:56 crc kubenswrapper[4757]: I0129 15:25:56.669206 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" 
containerName="ovnkube-controller" containerID="cri-o://d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe" gracePeriod=30 Jan 29 15:25:56 crc kubenswrapper[4757]: I0129 15:25:56.975409 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8fwvd_e6815a1b-56eb-4075-84ae-1af5d0dcb742/ovnkube-controller/3.log" Jan 29 15:25:56 crc kubenswrapper[4757]: I0129 15:25:56.977763 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8fwvd_e6815a1b-56eb-4075-84ae-1af5d0dcb742/ovn-acl-logging/0.log" Jan 29 15:25:56 crc kubenswrapper[4757]: I0129 15:25:56.978200 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8fwvd_e6815a1b-56eb-4075-84ae-1af5d0dcb742/ovn-controller/0.log" Jan 29 15:25:56 crc kubenswrapper[4757]: I0129 15:25:56.978637 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.034598 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-etc-openvswitch\") pod \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.034653 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e6815a1b-56eb-4075-84ae-1af5d0dcb742-ovn-node-metrics-cert\") pod \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.034682 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-run-netns\") pod \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.034707 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-run-openvswitch\") pod \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.034755 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-var-lib-cni-networks-ovn-kubernetes\") pod \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.034790 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e6815a1b-56eb-4075-84ae-1af5d0dcb742-ovnkube-config\") pod \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.034815 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-log-socket\") pod \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\" (UID: 
\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.034862 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-var-lib-openvswitch\") pod \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.034897 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-run-ovn\") pod \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.034938 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-systemd-units\") pod \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.034967 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-run-systemd\") pod \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.035001 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-node-log\") pod \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.035027 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e6815a1b-56eb-4075-84ae-1af5d0dcb742-env-overrides\") pod \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.035084 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e6815a1b-56eb-4075-84ae-1af5d0dcb742-ovnkube-script-lib\") pod \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.035120 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zhhj\" (UniqueName: \"kubernetes.io/projected/e6815a1b-56eb-4075-84ae-1af5d0dcb742-kube-api-access-5zhhj\") pod \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.035145 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-run-ovn-kubernetes\") pod \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.035165 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-kubelet\") pod \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\" (UID: 
\"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.035186 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-slash\") pod \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.035211 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-cni-bin\") pod \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.035231 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-cni-netd\") pod \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\" (UID: \"e6815a1b-56eb-4075-84ae-1af5d0dcb742\") " Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.035485 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "e6815a1b-56eb-4075-84ae-1af5d0dcb742" (UID: "e6815a1b-56eb-4075-84ae-1af5d0dcb742"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.035554 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "e6815a1b-56eb-4075-84ae-1af5d0dcb742" (UID: "e6815a1b-56eb-4075-84ae-1af5d0dcb742"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.035563 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "e6815a1b-56eb-4075-84ae-1af5d0dcb742" (UID: "e6815a1b-56eb-4075-84ae-1af5d0dcb742"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.035608 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-slash" (OuterVolumeSpecName: "host-slash") pod "e6815a1b-56eb-4075-84ae-1af5d0dcb742" (UID: "e6815a1b-56eb-4075-84ae-1af5d0dcb742"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.035600 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "e6815a1b-56eb-4075-84ae-1af5d0dcb742" (UID: "e6815a1b-56eb-4075-84ae-1af5d0dcb742"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.035633 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "e6815a1b-56eb-4075-84ae-1af5d0dcb742" (UID: "e6815a1b-56eb-4075-84ae-1af5d0dcb742"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.035670 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "e6815a1b-56eb-4075-84ae-1af5d0dcb742" (UID: "e6815a1b-56eb-4075-84ae-1af5d0dcb742"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.035702 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "e6815a1b-56eb-4075-84ae-1af5d0dcb742" (UID: "e6815a1b-56eb-4075-84ae-1af5d0dcb742"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.035727 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "e6815a1b-56eb-4075-84ae-1af5d0dcb742" (UID: "e6815a1b-56eb-4075-84ae-1af5d0dcb742"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.035751 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "e6815a1b-56eb-4075-84ae-1af5d0dcb742" (UID: "e6815a1b-56eb-4075-84ae-1af5d0dcb742"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.035777 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "e6815a1b-56eb-4075-84ae-1af5d0dcb742" (UID: "e6815a1b-56eb-4075-84ae-1af5d0dcb742"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.035800 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-log-socket" (OuterVolumeSpecName: "log-socket") pod "e6815a1b-56eb-4075-84ae-1af5d0dcb742" (UID: "e6815a1b-56eb-4075-84ae-1af5d0dcb742"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.035824 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "e6815a1b-56eb-4075-84ae-1af5d0dcb742" (UID: "e6815a1b-56eb-4075-84ae-1af5d0dcb742"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.035848 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-node-log" (OuterVolumeSpecName: "node-log") pod "e6815a1b-56eb-4075-84ae-1af5d0dcb742" (UID: "e6815a1b-56eb-4075-84ae-1af5d0dcb742"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.036060 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bw6hd"] Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.036186 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6815a1b-56eb-4075-84ae-1af5d0dcb742-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "e6815a1b-56eb-4075-84ae-1af5d0dcb742" (UID: "e6815a1b-56eb-4075-84ae-1af5d0dcb742"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.036507 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6815a1b-56eb-4075-84ae-1af5d0dcb742-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "e6815a1b-56eb-4075-84ae-1af5d0dcb742" (UID: "e6815a1b-56eb-4075-84ae-1af5d0dcb742"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:25:57 crc kubenswrapper[4757]: E0129 15:25:57.036299 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="ovnkube-controller" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.036668 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="ovnkube-controller" Jan 29 15:25:57 crc kubenswrapper[4757]: E0129 15:25:57.036690 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="nbdb" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.036699 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="nbdb" Jan 29 15:25:57 crc kubenswrapper[4757]: E0129 15:25:57.036720 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="ovnkube-controller" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.036728 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="ovnkube-controller" Jan 29 15:25:57 crc kubenswrapper[4757]: E0129 15:25:57.036742 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="ovn-acl-logging" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.036752 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="ovn-acl-logging" Jan 29 15:25:57 crc kubenswrapper[4757]: E0129 15:25:57.036762 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="northd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.036770 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="northd" Jan 29 15:25:57 crc kubenswrapper[4757]: E0129 15:25:57.036780 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="kubecfg-setup" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.036788 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="kubecfg-setup" Jan 29 15:25:57 crc kubenswrapper[4757]: E0129 15:25:57.036805 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="ovnkube-controller" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.036814 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="ovnkube-controller" Jan 29 15:25:57 crc kubenswrapper[4757]: E0129 15:25:57.036826 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="sbdb" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.036836 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="sbdb" Jan 29 15:25:57 crc kubenswrapper[4757]: E0129 15:25:57.036852 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="ovn-controller" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.036861 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" 
containerName="ovn-controller" Jan 29 15:25:57 crc kubenswrapper[4757]: E0129 15:25:57.036873 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.036881 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 15:25:57 crc kubenswrapper[4757]: E0129 15:25:57.036893 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="kube-rbac-proxy-node" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.036901 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="kube-rbac-proxy-node" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.037410 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="ovn-acl-logging" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.037436 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="ovn-controller" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.037448 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="ovnkube-controller" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.037457 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="ovnkube-controller" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.037469 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="ovnkube-controller" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.037479 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="nbdb" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.037532 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="ovnkube-controller" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.037560 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="sbdb" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.037572 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.037584 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="northd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.037593 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6815a1b-56eb-4075-84ae-1af5d0dcb742-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "e6815a1b-56eb-4075-84ae-1af5d0dcb742" (UID: "e6815a1b-56eb-4075-84ae-1af5d0dcb742"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.037599 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="kube-rbac-proxy-node" Jan 29 15:25:57 crc kubenswrapper[4757]: E0129 15:25:57.037739 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="ovnkube-controller" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.037750 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="ovnkube-controller" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.037857 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="ovnkube-controller" Jan 29 15:25:57 crc kubenswrapper[4757]: E0129 15:25:57.037942 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="ovnkube-controller" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.037952 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerName="ovnkube-controller" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.039334 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.040717 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6815a1b-56eb-4075-84ae-1af5d0dcb742-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "e6815a1b-56eb-4075-84ae-1af5d0dcb742" (UID: "e6815a1b-56eb-4075-84ae-1af5d0dcb742"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.040940 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6815a1b-56eb-4075-84ae-1af5d0dcb742-kube-api-access-5zhhj" (OuterVolumeSpecName: "kube-api-access-5zhhj") pod "e6815a1b-56eb-4075-84ae-1af5d0dcb742" (UID: "e6815a1b-56eb-4075-84ae-1af5d0dcb742"). InnerVolumeSpecName "kube-api-access-5zhhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.052003 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "e6815a1b-56eb-4075-84ae-1af5d0dcb742" (UID: "e6815a1b-56eb-4075-84ae-1af5d0dcb742"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.135899 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.135965 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-systemd-units\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.136001 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-var-lib-openvswitch\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.136034 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-host-slash\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.136062 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-host-run-netns\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.136091 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-node-log\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.136221 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c9d72b0c-2179-402a-b204-33e764ab2f50-env-overrides\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.136307 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c9d72b0c-2179-402a-b204-33e764ab2f50-ovnkube-config\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.136337 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-run-openvswitch\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.136388 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc4pf\" (UniqueName: \"kubernetes.io/projected/c9d72b0c-2179-402a-b204-33e764ab2f50-kube-api-access-wc4pf\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.136424 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-host-kubelet\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.136461 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-host-cni-netd\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.136498 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-log-socket\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.136683 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-run-ovn\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.136798 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-host-cni-bin\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.136882 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-run-systemd\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.136949 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c9d72b0c-2179-402a-b204-33e764ab2f50-ovn-node-metrics-cert\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.137047 4757 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c9d72b0c-2179-402a-b204-33e764ab2f50-ovnkube-script-lib\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.137160 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-host-run-ovn-kubernetes\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.137259 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-etc-openvswitch\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.137381 4757 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.137402 4757 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.137419 4757 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.137437 4757 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e6815a1b-56eb-4075-84ae-1af5d0dcb742-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.137488 4757 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.137504 4757 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.137525 4757 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.137543 4757 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e6815a1b-56eb-4075-84ae-1af5d0dcb742-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.137561 4757 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-log-socket\") on node \"crc\" DevicePath \"\"" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.137578 4757 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.137594 4757 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.137610 4757 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.137626 4757 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.137642 4757 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-node-log\") on node \"crc\" DevicePath \"\"" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.137658 4757 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e6815a1b-56eb-4075-84ae-1af5d0dcb742-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.137674 4757 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e6815a1b-56eb-4075-84ae-1af5d0dcb742-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.137690 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zhhj\" (UniqueName: \"kubernetes.io/projected/e6815a1b-56eb-4075-84ae-1af5d0dcb742-kube-api-access-5zhhj\") on node \"crc\" DevicePath \"\"" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.137708 4757 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.137724 4757 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.137739 4757 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e6815a1b-56eb-4075-84ae-1af5d0dcb742-host-slash\") on node \"crc\" DevicePath \"\"" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.239123 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-run-ovn\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.239201 4757 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-log-socket\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.239230 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-host-cni-bin\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.239299 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-run-systemd\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.239332 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c9d72b0c-2179-402a-b204-33e764ab2f50-ovn-node-metrics-cert\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.239380 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c9d72b0c-2179-402a-b204-33e764ab2f50-ovnkube-script-lib\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.239399 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-log-socket\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.239399 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-host-cni-bin\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.239424 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-host-run-ovn-kubernetes\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.239484 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-host-run-ovn-kubernetes\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.239566 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-etc-openvswitch\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.239614 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.239618 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-run-ovn\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.239683 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-systemd-units\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.239651 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-systemd-units\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.239732 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.239740 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-etc-openvswitch\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.239771 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-var-lib-openvswitch\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.239736 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-var-lib-openvswitch\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.239857 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-host-slash\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.239396 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-run-systemd\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.239904 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-host-run-netns\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.239930 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-host-slash\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.239951 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-node-log\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.239963 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-host-run-netns\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.240020 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c9d72b0c-2179-402a-b204-33e764ab2f50-env-overrides\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.240052 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c9d72b0c-2179-402a-b204-33e764ab2f50-ovnkube-config\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.240078 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-run-openvswitch\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.240121 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc4pf\" (UniqueName: \"kubernetes.io/projected/c9d72b0c-2179-402a-b204-33e764ab2f50-kube-api-access-wc4pf\") pod \"ovnkube-node-bw6hd\" (UID: 
\"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.240147 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-host-kubelet\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.240204 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-host-cni-netd\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.240309 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-host-cni-netd\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.240014 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-node-log\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.240828 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-host-kubelet\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.240826 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c9d72b0c-2179-402a-b204-33e764ab2f50-run-openvswitch\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.240890 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c9d72b0c-2179-402a-b204-33e764ab2f50-ovnkube-config\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.240892 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c9d72b0c-2179-402a-b204-33e764ab2f50-env-overrides\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.240937 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c9d72b0c-2179-402a-b204-33e764ab2f50-ovnkube-script-lib\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.245771 4757 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c9d72b0c-2179-402a-b204-33e764ab2f50-ovn-node-metrics-cert\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.260851 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc4pf\" (UniqueName: \"kubernetes.io/projected/c9d72b0c-2179-402a-b204-33e764ab2f50-kube-api-access-wc4pf\") pod \"ovnkube-node-bw6hd\" (UID: \"c9d72b0c-2179-402a-b204-33e764ab2f50\") " pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.353748 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.507547 4757 generic.go:334] "Generic (PLEG): container finished" podID="c9d72b0c-2179-402a-b204-33e764ab2f50" containerID="8974404e80c5cc646ad7fed84b969d06fca17a83b5049f35725ba87cdd5dfcb3" exitCode=0 Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.507630 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" event={"ID":"c9d72b0c-2179-402a-b204-33e764ab2f50","Type":"ContainerDied","Data":"8974404e80c5cc646ad7fed84b969d06fca17a83b5049f35725ba87cdd5dfcb3"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.507692 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" event={"ID":"c9d72b0c-2179-402a-b204-33e764ab2f50","Type":"ContainerStarted","Data":"948a65db34fb54b201f03f9f2c315c26366332c3619d794b68710a8933e055b1"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.509473 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bcbdt_fe6866d7-5a43-46d5-ba84-264847f9cd30/kube-multus/2.log" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.510807 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bcbdt_fe6866d7-5a43-46d5-ba84-264847f9cd30/kube-multus/1.log" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.510962 4757 generic.go:334] "Generic (PLEG): container finished" podID="fe6866d7-5a43-46d5-ba84-264847f9cd30" containerID="859df83d243d00747696baf633188d0927d51a4929ba5fc0bb8c0ad484d17f9d" exitCode=2 Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.511089 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bcbdt" event={"ID":"fe6866d7-5a43-46d5-ba84-264847f9cd30","Type":"ContainerDied","Data":"859df83d243d00747696baf633188d0927d51a4929ba5fc0bb8c0ad484d17f9d"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.511195 4757 scope.go:117] "RemoveContainer" containerID="06723594ec631b4e23ea44dab6453e705a548052738d6da15ae230b788e10933" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.511683 4757 scope.go:117] "RemoveContainer" containerID="859df83d243d00747696baf633188d0927d51a4929ba5fc0bb8c0ad484d17f9d" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.517024 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8fwvd_e6815a1b-56eb-4075-84ae-1af5d0dcb742/ovnkube-controller/3.log" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.519693 4757 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8fwvd_e6815a1b-56eb-4075-84ae-1af5d0dcb742/ovn-acl-logging/0.log" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.520313 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8fwvd_e6815a1b-56eb-4075-84ae-1af5d0dcb742/ovn-controller/0.log" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.520706 4757 generic.go:334] "Generic (PLEG): container finished" podID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerID="d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe" exitCode=0 Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.520735 4757 generic.go:334] "Generic (PLEG): container finished" podID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerID="845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9" exitCode=0 Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.520745 4757 generic.go:334] "Generic (PLEG): container finished" podID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerID="bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1" exitCode=0 Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.520755 4757 generic.go:334] "Generic (PLEG): container finished" podID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerID="c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02" exitCode=0 Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.520764 4757 generic.go:334] "Generic (PLEG): container finished" podID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerID="87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323" exitCode=0 Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.520773 4757 generic.go:334] "Generic (PLEG): container finished" podID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerID="e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407" exitCode=0 Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.520788 4757 generic.go:334] "Generic (PLEG): container finished" podID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerID="4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd" exitCode=143 Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.520797 4757 generic.go:334] "Generic (PLEG): container finished" podID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" containerID="253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa" exitCode=143 Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.520817 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" event={"ID":"e6815a1b-56eb-4075-84ae-1af5d0dcb742","Type":"ContainerDied","Data":"d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.520863 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" event={"ID":"e6815a1b-56eb-4075-84ae-1af5d0dcb742","Type":"ContainerDied","Data":"845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.520887 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" event={"ID":"e6815a1b-56eb-4075-84ae-1af5d0dcb742","Type":"ContainerDied","Data":"bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.520869 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.520902 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" event={"ID":"e6815a1b-56eb-4075-84ae-1af5d0dcb742","Type":"ContainerDied","Data":"c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.520917 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" event={"ID":"e6815a1b-56eb-4075-84ae-1af5d0dcb742","Type":"ContainerDied","Data":"87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.520933 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" event={"ID":"e6815a1b-56eb-4075-84ae-1af5d0dcb742","Type":"ContainerDied","Data":"e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.520950 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.520965 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.520972 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.520979 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.520990 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.520998 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521004 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521012 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521019 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521025 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c"} Jan 29 15:25:57 crc 
kubenswrapper[4757]: I0129 15:25:57.521035 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" event={"ID":"e6815a1b-56eb-4075-84ae-1af5d0dcb742","Type":"ContainerDied","Data":"4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521048 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521056 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521063 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521070 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521077 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521084 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521092 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521099 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521106 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521113 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521122 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" event={"ID":"e6815a1b-56eb-4075-84ae-1af5d0dcb742","Type":"ContainerDied","Data":"253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521134 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521142 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521149 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521157 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521164 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521170 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521177 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521184 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521191 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521198 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521208 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8fwvd" event={"ID":"e6815a1b-56eb-4075-84ae-1af5d0dcb742","Type":"ContainerDied","Data":"a6d8043f83fa78c26bbeae3a6dd10dc81f4827963e795853d866a7b857c693e1"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521220 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521228 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521293 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521303 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521310 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521316 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521323 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521331 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521337 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.521344 4757 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c"} Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.565353 4757 scope.go:117] "RemoveContainer" containerID="d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.578288 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-bljgl" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.600402 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8fwvd"] Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.601759 4757 scope.go:117] "RemoveContainer" containerID="5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.618379 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8fwvd"] Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.635992 4757 scope.go:117] "RemoveContainer" containerID="845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.654737 4757 scope.go:117] "RemoveContainer" containerID="bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.669044 4757 scope.go:117] "RemoveContainer" containerID="c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.690712 4757 scope.go:117] "RemoveContainer" containerID="87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.719586 4757 scope.go:117] "RemoveContainer" containerID="e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.738407 4757 scope.go:117] "RemoveContainer" containerID="4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.761909 4757 scope.go:117] "RemoveContainer" containerID="253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.807034 
4757 scope.go:117] "RemoveContainer" containerID="30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.825672 4757 scope.go:117] "RemoveContainer" containerID="d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe" Jan 29 15:25:57 crc kubenswrapper[4757]: E0129 15:25:57.826383 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe\": container with ID starting with d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe not found: ID does not exist" containerID="d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.826428 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe"} err="failed to get container status \"d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe\": rpc error: code = NotFound desc = could not find container \"d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe\": container with ID starting with d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.826454 4757 scope.go:117] "RemoveContainer" containerID="5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee" Jan 29 15:25:57 crc kubenswrapper[4757]: E0129 15:25:57.826833 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee\": container with ID starting with 5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee not found: ID does not exist" containerID="5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.826867 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee"} err="failed to get container status \"5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee\": rpc error: code = NotFound desc = could not find container \"5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee\": container with ID starting with 5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.826889 4757 scope.go:117] "RemoveContainer" containerID="845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9" Jan 29 15:25:57 crc kubenswrapper[4757]: E0129 15:25:57.827172 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\": container with ID starting with 845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9 not found: ID does not exist" containerID="845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.827220 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9"} err="failed to get container status 
\"845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\": rpc error: code = NotFound desc = could not find container \"845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\": container with ID starting with 845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9 not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.827247 4757 scope.go:117] "RemoveContainer" containerID="bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1" Jan 29 15:25:57 crc kubenswrapper[4757]: E0129 15:25:57.827543 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\": container with ID starting with bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1 not found: ID does not exist" containerID="bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.827562 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1"} err="failed to get container status \"bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\": rpc error: code = NotFound desc = could not find container \"bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\": container with ID starting with bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1 not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.827575 4757 scope.go:117] "RemoveContainer" containerID="c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02" Jan 29 15:25:57 crc kubenswrapper[4757]: E0129 15:25:57.827804 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\": container with ID starting with c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02 not found: ID does not exist" containerID="c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.827823 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02"} err="failed to get container status \"c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\": rpc error: code = NotFound desc = could not find container \"c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\": container with ID starting with c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02 not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.827835 4757 scope.go:117] "RemoveContainer" containerID="87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323" Jan 29 15:25:57 crc kubenswrapper[4757]: E0129 15:25:57.828533 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\": container with ID starting with 87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323 not found: ID does not exist" containerID="87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.828606 4757 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323"} err="failed to get container status \"87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\": rpc error: code = NotFound desc = could not find container \"87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\": container with ID starting with 87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323 not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.828628 4757 scope.go:117] "RemoveContainer" containerID="e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407" Jan 29 15:25:57 crc kubenswrapper[4757]: E0129 15:25:57.828927 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\": container with ID starting with e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407 not found: ID does not exist" containerID="e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.828952 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407"} err="failed to get container status \"e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\": rpc error: code = NotFound desc = could not find container \"e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\": container with ID starting with e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407 not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.828965 4757 scope.go:117] "RemoveContainer" containerID="4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd" Jan 29 15:25:57 crc kubenswrapper[4757]: E0129 15:25:57.829313 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\": container with ID starting with 4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd not found: ID does not exist" containerID="4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.829392 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd"} err="failed to get container status \"4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\": rpc error: code = NotFound desc = could not find container \"4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\": container with ID starting with 4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.829409 4757 scope.go:117] "RemoveContainer" containerID="253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa" Jan 29 15:25:57 crc kubenswrapper[4757]: E0129 15:25:57.829696 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\": container with ID starting with 253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa not found: ID does not exist" 
containerID="253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.829726 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa"} err="failed to get container status \"253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\": rpc error: code = NotFound desc = could not find container \"253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\": container with ID starting with 253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.829744 4757 scope.go:117] "RemoveContainer" containerID="30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c" Jan 29 15:25:57 crc kubenswrapper[4757]: E0129 15:25:57.829995 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\": container with ID starting with 30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c not found: ID does not exist" containerID="30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.830043 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c"} err="failed to get container status \"30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\": rpc error: code = NotFound desc = could not find container \"30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\": container with ID starting with 30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.830062 4757 scope.go:117] "RemoveContainer" containerID="d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.830316 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe"} err="failed to get container status \"d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe\": rpc error: code = NotFound desc = could not find container \"d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe\": container with ID starting with d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.830338 4757 scope.go:117] "RemoveContainer" containerID="5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.830576 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee"} err="failed to get container status \"5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee\": rpc error: code = NotFound desc = could not find container \"5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee\": container with ID starting with 5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.830604 4757 scope.go:117] "RemoveContainer" 
containerID="845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.830854 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9"} err="failed to get container status \"845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\": rpc error: code = NotFound desc = could not find container \"845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\": container with ID starting with 845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9 not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.830877 4757 scope.go:117] "RemoveContainer" containerID="bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.832233 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1"} err="failed to get container status \"bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\": rpc error: code = NotFound desc = could not find container \"bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\": container with ID starting with bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1 not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.832255 4757 scope.go:117] "RemoveContainer" containerID="c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.832624 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02"} err="failed to get container status \"c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\": rpc error: code = NotFound desc = could not find container \"c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\": container with ID starting with c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02 not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.832657 4757 scope.go:117] "RemoveContainer" containerID="87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.833021 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323"} err="failed to get container status \"87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\": rpc error: code = NotFound desc = could not find container \"87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\": container with ID starting with 87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323 not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.833052 4757 scope.go:117] "RemoveContainer" containerID="e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.833328 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407"} err="failed to get container status \"e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\": rpc error: code = NotFound desc = could not find 
container \"e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\": container with ID starting with e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407 not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.833349 4757 scope.go:117] "RemoveContainer" containerID="4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.833604 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd"} err="failed to get container status \"4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\": rpc error: code = NotFound desc = could not find container \"4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\": container with ID starting with 4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.833639 4757 scope.go:117] "RemoveContainer" containerID="253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.834020 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa"} err="failed to get container status \"253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\": rpc error: code = NotFound desc = could not find container \"253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\": container with ID starting with 253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.834053 4757 scope.go:117] "RemoveContainer" containerID="30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.834378 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c"} err="failed to get container status \"30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\": rpc error: code = NotFound desc = could not find container \"30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\": container with ID starting with 30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.834411 4757 scope.go:117] "RemoveContainer" containerID="d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.834695 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe"} err="failed to get container status \"d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe\": rpc error: code = NotFound desc = could not find container \"d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe\": container with ID starting with d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.834720 4757 scope.go:117] "RemoveContainer" containerID="5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.834977 4757 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee"} err="failed to get container status \"5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee\": rpc error: code = NotFound desc = could not find container \"5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee\": container with ID starting with 5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.834998 4757 scope.go:117] "RemoveContainer" containerID="845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.835254 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9"} err="failed to get container status \"845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\": rpc error: code = NotFound desc = could not find container \"845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\": container with ID starting with 845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9 not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.835331 4757 scope.go:117] "RemoveContainer" containerID="bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.835556 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1"} err="failed to get container status \"bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\": rpc error: code = NotFound desc = could not find container \"bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\": container with ID starting with bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1 not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.835589 4757 scope.go:117] "RemoveContainer" containerID="c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.835798 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02"} err="failed to get container status \"c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\": rpc error: code = NotFound desc = could not find container \"c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\": container with ID starting with c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02 not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.835828 4757 scope.go:117] "RemoveContainer" containerID="87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.836075 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323"} err="failed to get container status \"87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\": rpc error: code = NotFound desc = could not find container \"87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\": container with ID starting with 
87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323 not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.836097 4757 scope.go:117] "RemoveContainer" containerID="e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.836446 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407"} err="failed to get container status \"e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\": rpc error: code = NotFound desc = could not find container \"e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\": container with ID starting with e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407 not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.836468 4757 scope.go:117] "RemoveContainer" containerID="4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.836720 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd"} err="failed to get container status \"4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\": rpc error: code = NotFound desc = could not find container \"4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\": container with ID starting with 4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.836740 4757 scope.go:117] "RemoveContainer" containerID="253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.836986 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa"} err="failed to get container status \"253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\": rpc error: code = NotFound desc = could not find container \"253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\": container with ID starting with 253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.837004 4757 scope.go:117] "RemoveContainer" containerID="30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.837224 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c"} err="failed to get container status \"30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\": rpc error: code = NotFound desc = could not find container \"30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\": container with ID starting with 30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.837245 4757 scope.go:117] "RemoveContainer" containerID="d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.837495 4757 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe"} err="failed to get container status \"d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe\": rpc error: code = NotFound desc = could not find container \"d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe\": container with ID starting with d8f1fe7366c069031f37d0a29139fb4337743c91ab48c424facec4be16ec0dfe not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.837523 4757 scope.go:117] "RemoveContainer" containerID="5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.837915 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee"} err="failed to get container status \"5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee\": rpc error: code = NotFound desc = could not find container \"5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee\": container with ID starting with 5a73673192cf23e7c455199af5dcb524e1b4359316041397859af047be67b9ee not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.837944 4757 scope.go:117] "RemoveContainer" containerID="845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.838209 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9"} err="failed to get container status \"845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\": rpc error: code = NotFound desc = could not find container \"845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9\": container with ID starting with 845be2c1ca9277b774614969fb3e41085e6e35f6ce222a09a9d6a35dc06a00b9 not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.838231 4757 scope.go:117] "RemoveContainer" containerID="bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.838480 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1"} err="failed to get container status \"bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\": rpc error: code = NotFound desc = could not find container \"bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1\": container with ID starting with bf56b4ccc66baed7d0e7695ffe852a1b65b0071a2080b816e06759de721150c1 not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.838498 4757 scope.go:117] "RemoveContainer" containerID="c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.838746 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02"} err="failed to get container status \"c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\": rpc error: code = NotFound desc = could not find container \"c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02\": container with ID starting with c617ce677e55e83f335e52d6a9b014691cf22d62b6f5bd3d03b91e56df744f02 not found: ID does not exist" Jan 
29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.838767 4757 scope.go:117] "RemoveContainer" containerID="87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.839019 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323"} err="failed to get container status \"87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\": rpc error: code = NotFound desc = could not find container \"87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323\": container with ID starting with 87d8a5a5bb9f0817f33a728407de0d8caea3ec9b64dff58f1f54bad5f71ef323 not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.839038 4757 scope.go:117] "RemoveContainer" containerID="e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.839206 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407"} err="failed to get container status \"e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\": rpc error: code = NotFound desc = could not find container \"e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407\": container with ID starting with e37cc6526c499c552ee7eae53982cccd0130185228e7b5e85da159a553d3a407 not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.839221 4757 scope.go:117] "RemoveContainer" containerID="4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.839466 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd"} err="failed to get container status \"4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\": rpc error: code = NotFound desc = could not find container \"4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd\": container with ID starting with 4ca85b7d2ac7c0d57fd412d7254f859416220c0fb8b36cccf7ea556f30ce3ecd not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.839487 4757 scope.go:117] "RemoveContainer" containerID="253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.839669 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa"} err="failed to get container status \"253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\": rpc error: code = NotFound desc = could not find container \"253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa\": container with ID starting with 253917206c82c118761a980eb9843e26705328388ed39d1a1af8f0e542d9d3fa not found: ID does not exist" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.839687 4757 scope.go:117] "RemoveContainer" containerID="30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c" Jan 29 15:25:57 crc kubenswrapper[4757]: I0129 15:25:57.839882 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c"} err="failed to get container status 
\"30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\": rpc error: code = NotFound desc = could not find container \"30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c\": container with ID starting with 30389bc8dfdfaffdf3055eb7aea0e04b9d8fe2c9cd93e2175a2128af72d00d7c not found: ID does not exist" Jan 29 15:25:58 crc kubenswrapper[4757]: I0129 15:25:58.527557 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" event={"ID":"c9d72b0c-2179-402a-b204-33e764ab2f50","Type":"ContainerStarted","Data":"39526a3f88ceb760a94a3b5e6639883d389e6997177943aa494e3b8812b5be9d"} Jan 29 15:25:58 crc kubenswrapper[4757]: I0129 15:25:58.527857 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" event={"ID":"c9d72b0c-2179-402a-b204-33e764ab2f50","Type":"ContainerStarted","Data":"f0cb834d941a3122ce0e60354dd020c1e0dc882f199d99620542ddedd74d9e48"} Jan 29 15:25:58 crc kubenswrapper[4757]: I0129 15:25:58.527868 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" event={"ID":"c9d72b0c-2179-402a-b204-33e764ab2f50","Type":"ContainerStarted","Data":"abea06b4f73c94c47d07ca8c6f3cbdcc78e62e263c42a2e7f21dae3ee5e61477"} Jan 29 15:25:58 crc kubenswrapper[4757]: I0129 15:25:58.527879 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" event={"ID":"c9d72b0c-2179-402a-b204-33e764ab2f50","Type":"ContainerStarted","Data":"83ccdf2e21f8ad2688055b40077260f42121246000ce184bfde49a533adb02ad"} Jan 29 15:25:58 crc kubenswrapper[4757]: I0129 15:25:58.527888 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" event={"ID":"c9d72b0c-2179-402a-b204-33e764ab2f50","Type":"ContainerStarted","Data":"a69198e984937762923844e6c08f9d2a2ae22e0e9d5a968213a2fa06e22e15fe"} Jan 29 15:25:58 crc kubenswrapper[4757]: I0129 15:25:58.527896 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" event={"ID":"c9d72b0c-2179-402a-b204-33e764ab2f50","Type":"ContainerStarted","Data":"592ec24ded7cbf85a790fdfcf497a1edca1ed97025c0677572cf492b67915bb8"} Jan 29 15:25:58 crc kubenswrapper[4757]: I0129 15:25:58.529366 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bcbdt_fe6866d7-5a43-46d5-ba84-264847f9cd30/kube-multus/2.log" Jan 29 15:25:58 crc kubenswrapper[4757]: I0129 15:25:58.529445 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bcbdt" event={"ID":"fe6866d7-5a43-46d5-ba84-264847f9cd30","Type":"ContainerStarted","Data":"4d5a573ed56326fae8d47c32d4160b0691062de19dc69a182ef508f32e4ae141"} Jan 29 15:25:59 crc kubenswrapper[4757]: I0129 15:25:59.402243 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6815a1b-56eb-4075-84ae-1af5d0dcb742" path="/var/lib/kubelet/pods/e6815a1b-56eb-4075-84ae-1af5d0dcb742/volumes" Jan 29 15:26:00 crc kubenswrapper[4757]: I0129 15:26:00.542637 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" event={"ID":"c9d72b0c-2179-402a-b204-33e764ab2f50","Type":"ContainerStarted","Data":"1b7ffc450a0001bb014b4e64801284bb48eda1be3d16e1b0d0340842a07bbc3e"} Jan 29 15:26:03 crc kubenswrapper[4757]: I0129 15:26:03.565828 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" 
event={"ID":"c9d72b0c-2179-402a-b204-33e764ab2f50","Type":"ContainerStarted","Data":"5c7485452bf20e12aaa899ffd3e4402be5cd884a367fd8de978255af8b51163e"} Jan 29 15:26:03 crc kubenswrapper[4757]: I0129 15:26:03.567204 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:26:03 crc kubenswrapper[4757]: I0129 15:26:03.567307 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:26:03 crc kubenswrapper[4757]: I0129 15:26:03.567373 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:26:03 crc kubenswrapper[4757]: I0129 15:26:03.599550 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" podStartSLOduration=6.5995305250000005 podStartE2EDuration="6.599530525s" podCreationTimestamp="2026-01-29 15:25:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:26:03.594761165 +0000 UTC m=+926.884011422" watchObservedRunningTime="2026-01-29 15:26:03.599530525 +0000 UTC m=+926.888780762" Jan 29 15:26:03 crc kubenswrapper[4757]: I0129 15:26:03.601914 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:26:03 crc kubenswrapper[4757]: I0129 15:26:03.604470 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:26:17 crc kubenswrapper[4757]: I0129 15:26:17.604932 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:26:17 crc kubenswrapper[4757]: I0129 15:26:17.605847 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:26:27 crc kubenswrapper[4757]: I0129 15:26:27.376117 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bw6hd" Jan 29 15:26:39 crc kubenswrapper[4757]: I0129 15:26:39.818699 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz"] Jan 29 15:26:39 crc kubenswrapper[4757]: I0129 15:26:39.821432 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz" Jan 29 15:26:39 crc kubenswrapper[4757]: I0129 15:26:39.823313 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 15:26:39 crc kubenswrapper[4757]: I0129 15:26:39.831775 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz"] Jan 29 15:26:39 crc kubenswrapper[4757]: I0129 15:26:39.912123 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz\" (UID: \"3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz" Jan 29 15:26:39 crc kubenswrapper[4757]: I0129 15:26:39.912171 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz\" (UID: \"3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz" Jan 29 15:26:39 crc kubenswrapper[4757]: I0129 15:26:39.912219 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46bz8\" (UniqueName: \"kubernetes.io/projected/3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2-kube-api-access-46bz8\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz\" (UID: \"3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz" Jan 29 15:26:40 crc kubenswrapper[4757]: I0129 15:26:40.013331 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46bz8\" (UniqueName: \"kubernetes.io/projected/3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2-kube-api-access-46bz8\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz\" (UID: \"3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz" Jan 29 15:26:40 crc kubenswrapper[4757]: I0129 15:26:40.013426 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz\" (UID: \"3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz" Jan 29 15:26:40 crc kubenswrapper[4757]: I0129 15:26:40.013448 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz\" (UID: \"3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz" Jan 29 15:26:40 crc kubenswrapper[4757]: I0129 15:26:40.013834 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz\" (UID: \"3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz" Jan 29 15:26:40 crc kubenswrapper[4757]: I0129 15:26:40.014066 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz\" (UID: \"3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz" Jan 29 15:26:40 crc kubenswrapper[4757]: I0129 15:26:40.031399 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46bz8\" (UniqueName: \"kubernetes.io/projected/3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2-kube-api-access-46bz8\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz\" (UID: \"3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz" Jan 29 15:26:40 crc kubenswrapper[4757]: I0129 15:26:40.134711 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz" Jan 29 15:26:40 crc kubenswrapper[4757]: I0129 15:26:40.340816 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz"] Jan 29 15:26:40 crc kubenswrapper[4757]: W0129 15:26:40.346212 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a6f611a_8f0d_45a7_a1f4_75cb85eb65a2.slice/crio-4f6511d7f0e96222380ad90f36ab905fca5b0f8d75e48504b6d9aee686b099c5 WatchSource:0}: Error finding container 4f6511d7f0e96222380ad90f36ab905fca5b0f8d75e48504b6d9aee686b099c5: Status 404 returned error can't find the container with id 4f6511d7f0e96222380ad90f36ab905fca5b0f8d75e48504b6d9aee686b099c5 Jan 29 15:26:40 crc kubenswrapper[4757]: I0129 15:26:40.765190 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz" event={"ID":"3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2","Type":"ContainerStarted","Data":"26b341a998019aa3f2b4230c7db8d2fd1fe94df448ccdb603d151547675ca977"} Jan 29 15:26:40 crc kubenswrapper[4757]: I0129 15:26:40.765256 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz" event={"ID":"3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2","Type":"ContainerStarted","Data":"4f6511d7f0e96222380ad90f36ab905fca5b0f8d75e48504b6d9aee686b099c5"} Jan 29 15:26:41 crc kubenswrapper[4757]: I0129 15:26:41.776678 4757 generic.go:334] "Generic (PLEG): container finished" podID="3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2" containerID="26b341a998019aa3f2b4230c7db8d2fd1fe94df448ccdb603d151547675ca977" exitCode=0 Jan 29 15:26:41 crc kubenswrapper[4757]: I0129 15:26:41.776733 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz" event={"ID":"3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2","Type":"ContainerDied","Data":"26b341a998019aa3f2b4230c7db8d2fd1fe94df448ccdb603d151547675ca977"} Jan 29 15:26:41 crc 
kubenswrapper[4757]: E0129 15:26:41.911095 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14" Jan 29 15:26:41 crc kubenswrapper[4757]: E0129 15:26:41.911579 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-46bz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz_openshift-marketplace(3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2): ErrImagePull: initializing source docker://registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:26:41 crc kubenswrapper[4757]: E0129 15:26:41.912855 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"initializing source docker://registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz" podUID="3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2" Jan 29 15:26:42 crc kubenswrapper[4757]: I0129 15:26:42.186589 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rcpx8"] Jan 29 15:26:42 crc kubenswrapper[4757]: I0129 15:26:42.189339 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rcpx8" Jan 29 15:26:42 crc kubenswrapper[4757]: I0129 15:26:42.242142 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rcpx8"] Jan 29 15:26:42 crc kubenswrapper[4757]: I0129 15:26:42.244409 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65vvp\" (UniqueName: \"kubernetes.io/projected/f5bee272-ecb5-4516-a079-c2d83bb7ffc5-kube-api-access-65vvp\") pod \"redhat-operators-rcpx8\" (UID: \"f5bee272-ecb5-4516-a079-c2d83bb7ffc5\") " pod="openshift-marketplace/redhat-operators-rcpx8" Jan 29 15:26:42 crc kubenswrapper[4757]: I0129 15:26:42.244454 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5bee272-ecb5-4516-a079-c2d83bb7ffc5-utilities\") pod \"redhat-operators-rcpx8\" (UID: \"f5bee272-ecb5-4516-a079-c2d83bb7ffc5\") " pod="openshift-marketplace/redhat-operators-rcpx8" Jan 29 15:26:42 crc kubenswrapper[4757]: I0129 15:26:42.244503 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5bee272-ecb5-4516-a079-c2d83bb7ffc5-catalog-content\") pod \"redhat-operators-rcpx8\" (UID: \"f5bee272-ecb5-4516-a079-c2d83bb7ffc5\") " pod="openshift-marketplace/redhat-operators-rcpx8" Jan 29 15:26:42 crc kubenswrapper[4757]: I0129 15:26:42.346029 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65vvp\" (UniqueName: \"kubernetes.io/projected/f5bee272-ecb5-4516-a079-c2d83bb7ffc5-kube-api-access-65vvp\") pod \"redhat-operators-rcpx8\" (UID: \"f5bee272-ecb5-4516-a079-c2d83bb7ffc5\") " pod="openshift-marketplace/redhat-operators-rcpx8" Jan 29 15:26:42 crc kubenswrapper[4757]: I0129 15:26:42.346086 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5bee272-ecb5-4516-a079-c2d83bb7ffc5-utilities\") pod \"redhat-operators-rcpx8\" (UID: \"f5bee272-ecb5-4516-a079-c2d83bb7ffc5\") " pod="openshift-marketplace/redhat-operators-rcpx8" Jan 29 15:26:42 crc kubenswrapper[4757]: I0129 15:26:42.346142 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5bee272-ecb5-4516-a079-c2d83bb7ffc5-catalog-content\") pod \"redhat-operators-rcpx8\" (UID: \"f5bee272-ecb5-4516-a079-c2d83bb7ffc5\") " pod="openshift-marketplace/redhat-operators-rcpx8" Jan 29 15:26:42 crc kubenswrapper[4757]: I0129 15:26:42.346744 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5bee272-ecb5-4516-a079-c2d83bb7ffc5-catalog-content\") pod \"redhat-operators-rcpx8\" (UID: \"f5bee272-ecb5-4516-a079-c2d83bb7ffc5\") " pod="openshift-marketplace/redhat-operators-rcpx8" Jan 29 15:26:42 crc kubenswrapper[4757]: I0129 15:26:42.346818 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5bee272-ecb5-4516-a079-c2d83bb7ffc5-utilities\") pod \"redhat-operators-rcpx8\" (UID: \"f5bee272-ecb5-4516-a079-c2d83bb7ffc5\") " pod="openshift-marketplace/redhat-operators-rcpx8" Jan 29 15:26:42 crc kubenswrapper[4757]: I0129 15:26:42.364126 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-65vvp\" (UniqueName: \"kubernetes.io/projected/f5bee272-ecb5-4516-a079-c2d83bb7ffc5-kube-api-access-65vvp\") pod \"redhat-operators-rcpx8\" (UID: \"f5bee272-ecb5-4516-a079-c2d83bb7ffc5\") " pod="openshift-marketplace/redhat-operators-rcpx8" Jan 29 15:26:42 crc kubenswrapper[4757]: I0129 15:26:42.519449 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rcpx8" Jan 29 15:26:42 crc kubenswrapper[4757]: I0129 15:26:42.713745 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rcpx8"] Jan 29 15:26:42 crc kubenswrapper[4757]: W0129 15:26:42.720156 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5bee272_ecb5_4516_a079_c2d83bb7ffc5.slice/crio-ac8ee5c03cc73936cd35ec56b5f9fd6cbc69e351ca0f2ee8638f3c30421e982c WatchSource:0}: Error finding container ac8ee5c03cc73936cd35ec56b5f9fd6cbc69e351ca0f2ee8638f3c30421e982c: Status 404 returned error can't find the container with id ac8ee5c03cc73936cd35ec56b5f9fd6cbc69e351ca0f2ee8638f3c30421e982c Jan 29 15:26:42 crc kubenswrapper[4757]: I0129 15:26:42.783506 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rcpx8" event={"ID":"f5bee272-ecb5-4516-a079-c2d83bb7ffc5","Type":"ContainerStarted","Data":"ac8ee5c03cc73936cd35ec56b5f9fd6cbc69e351ca0f2ee8638f3c30421e982c"} Jan 29 15:26:42 crc kubenswrapper[4757]: E0129 15:26:42.784695 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14\\\"\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz" podUID="3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2" Jan 29 15:26:43 crc kubenswrapper[4757]: I0129 15:26:43.790489 4757 generic.go:334] "Generic (PLEG): container finished" podID="f5bee272-ecb5-4516-a079-c2d83bb7ffc5" containerID="009b6dfa7353bd9b1d628fe080a809693b5ab2402fe04bbf7a3c42dcd2b7a995" exitCode=0 Jan 29 15:26:43 crc kubenswrapper[4757]: I0129 15:26:43.790539 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rcpx8" event={"ID":"f5bee272-ecb5-4516-a079-c2d83bb7ffc5","Type":"ContainerDied","Data":"009b6dfa7353bd9b1d628fe080a809693b5ab2402fe04bbf7a3c42dcd2b7a995"} Jan 29 15:26:43 crc kubenswrapper[4757]: E0129 15:26:43.908857 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:26:43 crc kubenswrapper[4757]: E0129 15:26:43.909256 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-65vvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-rcpx8_openshift-marketplace(f5bee272-ecb5-4516-a079-c2d83bb7ffc5): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:26:43 crc kubenswrapper[4757]: E0129 15:26:43.910478 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-rcpx8" podUID="f5bee272-ecb5-4516-a079-c2d83bb7ffc5" Jan 29 15:26:44 crc kubenswrapper[4757]: E0129 15:26:44.797724 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-rcpx8" podUID="f5bee272-ecb5-4516-a079-c2d83bb7ffc5" Jan 29 15:26:47 crc kubenswrapper[4757]: I0129 15:26:47.604457 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:26:47 crc kubenswrapper[4757]: I0129 15:26:47.604720 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:26:58 crc kubenswrapper[4757]: E0129 15:26:58.520045 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14: Requesting bearer token: 
invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14" Jan 29 15:26:58 crc kubenswrapper[4757]: E0129 15:26:58.520525 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-46bz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz_openshift-marketplace(3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2): ErrImagePull: initializing source docker://registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:26:58 crc kubenswrapper[4757]: E0129 15:26:58.521732 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"initializing source docker://registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz" podUID="3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2" Jan 29 15:26:58 crc kubenswrapper[4757]: E0129 15:26:58.534914 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:26:58 crc kubenswrapper[4757]: E0129 15:26:58.535047 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs 
--catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-65vvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-rcpx8_openshift-marketplace(f5bee272-ecb5-4516-a079-c2d83bb7ffc5): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:26:58 crc kubenswrapper[4757]: E0129 15:26:58.536357 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-rcpx8" podUID="f5bee272-ecb5-4516-a079-c2d83bb7ffc5" Jan 29 15:27:09 crc kubenswrapper[4757]: E0129 15:27:09.400620 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14\\\"\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz" podUID="3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2" Jan 29 15:27:12 crc kubenswrapper[4757]: E0129 15:27:12.398217 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-rcpx8" podUID="f5bee272-ecb5-4516-a079-c2d83bb7ffc5" Jan 29 15:27:17 crc kubenswrapper[4757]: I0129 15:27:17.605121 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:27:17 crc kubenswrapper[4757]: I0129 15:27:17.605510 4757 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:27:17 crc kubenswrapper[4757]: I0129 15:27:17.605588 4757 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" Jan 29 15:27:17 crc kubenswrapper[4757]: I0129 15:27:17.606381 4757 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"42c07bced4a4cc16b1156866e5aecd360caa654a1d5d2eaf998a21db73871643"} pod="openshift-machine-config-operator/machine-config-daemon-45q8t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:27:17 crc kubenswrapper[4757]: I0129 15:27:17.606764 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" containerID="cri-o://42c07bced4a4cc16b1156866e5aecd360caa654a1d5d2eaf998a21db73871643" gracePeriod=600 Jan 29 15:27:17 crc kubenswrapper[4757]: I0129 15:27:17.978780 4757 generic.go:334] "Generic (PLEG): container finished" podID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerID="42c07bced4a4cc16b1156866e5aecd360caa654a1d5d2eaf998a21db73871643" exitCode=0 Jan 29 15:27:17 crc kubenswrapper[4757]: I0129 15:27:17.978843 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerDied","Data":"42c07bced4a4cc16b1156866e5aecd360caa654a1d5d2eaf998a21db73871643"} Jan 29 15:27:17 crc kubenswrapper[4757]: I0129 15:27:17.978905 4757 scope.go:117] "RemoveContainer" containerID="174440c846854cb49768c9b08f3011bcfb796de0989f3816b5db8245b48df983" Jan 29 15:27:18 crc kubenswrapper[4757]: I0129 15:27:18.996781 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerStarted","Data":"26224c213349170284329c384d8b105e9ad831590acee9b01765c926f542d25f"} Jan 29 15:27:24 crc kubenswrapper[4757]: I0129 15:27:24.031018 4757 generic.go:334] "Generic (PLEG): container finished" podID="3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2" containerID="6289d541bd031298c92d56ad56b26ba7c1900f561a9ecd0829e37820c5a5ddf9" exitCode=0 Jan 29 15:27:24 crc kubenswrapper[4757]: I0129 15:27:24.031143 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz" event={"ID":"3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2","Type":"ContainerDied","Data":"6289d541bd031298c92d56ad56b26ba7c1900f561a9ecd0829e37820c5a5ddf9"} Jan 29 15:27:25 crc kubenswrapper[4757]: I0129 15:27:25.039392 4757 generic.go:334] "Generic (PLEG): container finished" podID="3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2" containerID="891a9e019da8f9dfb22cc05b3fecd1a556072d247eba643b7c94b70379fc650a" exitCode=0 Jan 29 15:27:25 crc kubenswrapper[4757]: I0129 15:27:25.040208 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz" 
event={"ID":"3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2","Type":"ContainerDied","Data":"891a9e019da8f9dfb22cc05b3fecd1a556072d247eba643b7c94b70379fc650a"} Jan 29 15:27:26 crc kubenswrapper[4757]: I0129 15:27:26.049881 4757 generic.go:334] "Generic (PLEG): container finished" podID="f5bee272-ecb5-4516-a079-c2d83bb7ffc5" containerID="ff3b1b039fea10471e3b9611702dadd1b4100876768df9262d2a6c01d917279b" exitCode=0 Jan 29 15:27:26 crc kubenswrapper[4757]: I0129 15:27:26.049981 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rcpx8" event={"ID":"f5bee272-ecb5-4516-a079-c2d83bb7ffc5","Type":"ContainerDied","Data":"ff3b1b039fea10471e3b9611702dadd1b4100876768df9262d2a6c01d917279b"} Jan 29 15:27:26 crc kubenswrapper[4757]: I0129 15:27:26.286290 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz" Jan 29 15:27:26 crc kubenswrapper[4757]: I0129 15:27:26.444882 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46bz8\" (UniqueName: \"kubernetes.io/projected/3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2-kube-api-access-46bz8\") pod \"3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2\" (UID: \"3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2\") " Jan 29 15:27:26 crc kubenswrapper[4757]: I0129 15:27:26.445227 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2-bundle\") pod \"3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2\" (UID: \"3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2\") " Jan 29 15:27:26 crc kubenswrapper[4757]: I0129 15:27:26.445330 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2-util\") pod \"3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2\" (UID: \"3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2\") " Jan 29 15:27:26 crc kubenswrapper[4757]: I0129 15:27:26.445876 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2-bundle" (OuterVolumeSpecName: "bundle") pod "3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2" (UID: "3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:27:26 crc kubenswrapper[4757]: I0129 15:27:26.450448 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2-kube-api-access-46bz8" (OuterVolumeSpecName: "kube-api-access-46bz8") pod "3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2" (UID: "3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2"). InnerVolumeSpecName "kube-api-access-46bz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:26 crc kubenswrapper[4757]: I0129 15:27:26.461550 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2-util" (OuterVolumeSpecName: "util") pod "3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2" (UID: "3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:27:26 crc kubenswrapper[4757]: I0129 15:27:26.546043 4757 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2-util\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:26 crc kubenswrapper[4757]: I0129 15:27:26.546073 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46bz8\" (UniqueName: \"kubernetes.io/projected/3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2-kube-api-access-46bz8\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:26 crc kubenswrapper[4757]: I0129 15:27:26.546085 4757 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:27 crc kubenswrapper[4757]: I0129 15:27:27.060426 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz" event={"ID":"3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2","Type":"ContainerDied","Data":"4f6511d7f0e96222380ad90f36ab905fca5b0f8d75e48504b6d9aee686b099c5"} Jan 29 15:27:27 crc kubenswrapper[4757]: I0129 15:27:27.060465 4757 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f6511d7f0e96222380ad90f36ab905fca5b0f8d75e48504b6d9aee686b099c5" Jan 29 15:27:27 crc kubenswrapper[4757]: I0129 15:27:27.060444 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz" Jan 29 15:27:27 crc kubenswrapper[4757]: I0129 15:27:27.063833 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rcpx8" event={"ID":"f5bee272-ecb5-4516-a079-c2d83bb7ffc5","Type":"ContainerStarted","Data":"a80876ef34a1004ab412fcd0b55eee194b54b6faa9a518c28fa24d0f3e56b131"} Jan 29 15:27:27 crc kubenswrapper[4757]: I0129 15:27:27.085605 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rcpx8" podStartSLOduration=2.35899553 podStartE2EDuration="45.085584846s" podCreationTimestamp="2026-01-29 15:26:42 +0000 UTC" firstStartedPulling="2026-01-29 15:26:43.792866877 +0000 UTC m=+967.082117124" lastFinishedPulling="2026-01-29 15:27:26.519456203 +0000 UTC m=+1009.808706440" observedRunningTime="2026-01-29 15:27:27.082902048 +0000 UTC m=+1010.372152315" watchObservedRunningTime="2026-01-29 15:27:27.085584846 +0000 UTC m=+1010.374835093" Jan 29 15:27:31 crc kubenswrapper[4757]: I0129 15:27:31.702165 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-l49b9"] Jan 29 15:27:31 crc kubenswrapper[4757]: E0129 15:27:31.702666 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2" containerName="pull" Jan 29 15:27:31 crc kubenswrapper[4757]: I0129 15:27:31.702678 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2" containerName="pull" Jan 29 15:27:31 crc kubenswrapper[4757]: E0129 15:27:31.702690 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2" containerName="util" Jan 29 15:27:31 crc kubenswrapper[4757]: I0129 15:27:31.702696 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2" containerName="util" Jan 29 15:27:31 crc 
kubenswrapper[4757]: E0129 15:27:31.702705 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2" containerName="extract" Jan 29 15:27:31 crc kubenswrapper[4757]: I0129 15:27:31.702711 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2" containerName="extract" Jan 29 15:27:31 crc kubenswrapper[4757]: I0129 15:27:31.702803 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2" containerName="extract" Jan 29 15:27:31 crc kubenswrapper[4757]: I0129 15:27:31.703174 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-l49b9" Jan 29 15:27:31 crc kubenswrapper[4757]: I0129 15:27:31.705222 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 29 15:27:31 crc kubenswrapper[4757]: I0129 15:27:31.706970 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-mxj6h" Jan 29 15:27:31 crc kubenswrapper[4757]: I0129 15:27:31.707411 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 29 15:27:31 crc kubenswrapper[4757]: I0129 15:27:31.729819 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-l49b9"] Jan 29 15:27:31 crc kubenswrapper[4757]: I0129 15:27:31.809905 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clsts\" (UniqueName: \"kubernetes.io/projected/0c80f85e-cab4-4177-800e-0fb5f301c838-kube-api-access-clsts\") pod \"nmstate-operator-646758c888-l49b9\" (UID: \"0c80f85e-cab4-4177-800e-0fb5f301c838\") " pod="openshift-nmstate/nmstate-operator-646758c888-l49b9" Jan 29 15:27:31 crc kubenswrapper[4757]: I0129 15:27:31.911121 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clsts\" (UniqueName: \"kubernetes.io/projected/0c80f85e-cab4-4177-800e-0fb5f301c838-kube-api-access-clsts\") pod \"nmstate-operator-646758c888-l49b9\" (UID: \"0c80f85e-cab4-4177-800e-0fb5f301c838\") " pod="openshift-nmstate/nmstate-operator-646758c888-l49b9" Jan 29 15:27:31 crc kubenswrapper[4757]: I0129 15:27:31.935432 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clsts\" (UniqueName: \"kubernetes.io/projected/0c80f85e-cab4-4177-800e-0fb5f301c838-kube-api-access-clsts\") pod \"nmstate-operator-646758c888-l49b9\" (UID: \"0c80f85e-cab4-4177-800e-0fb5f301c838\") " pod="openshift-nmstate/nmstate-operator-646758c888-l49b9" Jan 29 15:27:32 crc kubenswrapper[4757]: I0129 15:27:32.018635 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-l49b9" Jan 29 15:27:32 crc kubenswrapper[4757]: I0129 15:27:32.256034 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-l49b9"] Jan 29 15:27:32 crc kubenswrapper[4757]: I0129 15:27:32.520133 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rcpx8" Jan 29 15:27:32 crc kubenswrapper[4757]: I0129 15:27:32.520502 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rcpx8" Jan 29 15:27:32 crc kubenswrapper[4757]: I0129 15:27:32.584341 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rcpx8" Jan 29 15:27:33 crc kubenswrapper[4757]: I0129 15:27:33.109505 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-l49b9" event={"ID":"0c80f85e-cab4-4177-800e-0fb5f301c838","Type":"ContainerStarted","Data":"4ff50eb5efb67eab105eeb751d3c92760cc458a0114781ff48cb37b5fb038bc8"} Jan 29 15:27:33 crc kubenswrapper[4757]: I0129 15:27:33.144145 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rcpx8" Jan 29 15:27:35 crc kubenswrapper[4757]: I0129 15:27:35.121111 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-l49b9" event={"ID":"0c80f85e-cab4-4177-800e-0fb5f301c838","Type":"ContainerStarted","Data":"02eab0667f407db536585c2ef0ea38de0abc3ca0c20c638840a81a117bb60032"} Jan 29 15:27:35 crc kubenswrapper[4757]: I0129 15:27:35.152304 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-l49b9" podStartSLOduration=2.246919646 podStartE2EDuration="4.152239155s" podCreationTimestamp="2026-01-29 15:27:31 +0000 UTC" firstStartedPulling="2026-01-29 15:27:32.269380065 +0000 UTC m=+1015.558630302" lastFinishedPulling="2026-01-29 15:27:34.174699534 +0000 UTC m=+1017.463949811" observedRunningTime="2026-01-29 15:27:35.145594272 +0000 UTC m=+1018.434844509" watchObservedRunningTime="2026-01-29 15:27:35.152239155 +0000 UTC m=+1018.441489432" Jan 29 15:27:35 crc kubenswrapper[4757]: I0129 15:27:35.252163 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rcpx8"] Jan 29 15:27:35 crc kubenswrapper[4757]: I0129 15:27:35.252377 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rcpx8" podUID="f5bee272-ecb5-4516-a079-c2d83bb7ffc5" containerName="registry-server" containerID="cri-o://a80876ef34a1004ab412fcd0b55eee194b54b6faa9a518c28fa24d0f3e56b131" gracePeriod=2 Jan 29 15:27:35 crc kubenswrapper[4757]: I0129 15:27:35.648302 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rcpx8" Jan 29 15:27:35 crc kubenswrapper[4757]: I0129 15:27:35.679871 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5bee272-ecb5-4516-a079-c2d83bb7ffc5-utilities\") pod \"f5bee272-ecb5-4516-a079-c2d83bb7ffc5\" (UID: \"f5bee272-ecb5-4516-a079-c2d83bb7ffc5\") " Jan 29 15:27:35 crc kubenswrapper[4757]: I0129 15:27:35.679954 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65vvp\" (UniqueName: \"kubernetes.io/projected/f5bee272-ecb5-4516-a079-c2d83bb7ffc5-kube-api-access-65vvp\") pod \"f5bee272-ecb5-4516-a079-c2d83bb7ffc5\" (UID: \"f5bee272-ecb5-4516-a079-c2d83bb7ffc5\") " Jan 29 15:27:35 crc kubenswrapper[4757]: I0129 15:27:35.680000 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5bee272-ecb5-4516-a079-c2d83bb7ffc5-catalog-content\") pod \"f5bee272-ecb5-4516-a079-c2d83bb7ffc5\" (UID: \"f5bee272-ecb5-4516-a079-c2d83bb7ffc5\") " Jan 29 15:27:35 crc kubenswrapper[4757]: I0129 15:27:35.681173 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5bee272-ecb5-4516-a079-c2d83bb7ffc5-utilities" (OuterVolumeSpecName: "utilities") pod "f5bee272-ecb5-4516-a079-c2d83bb7ffc5" (UID: "f5bee272-ecb5-4516-a079-c2d83bb7ffc5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:27:35 crc kubenswrapper[4757]: I0129 15:27:35.705467 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5bee272-ecb5-4516-a079-c2d83bb7ffc5-kube-api-access-65vvp" (OuterVolumeSpecName: "kube-api-access-65vvp") pod "f5bee272-ecb5-4516-a079-c2d83bb7ffc5" (UID: "f5bee272-ecb5-4516-a079-c2d83bb7ffc5"). InnerVolumeSpecName "kube-api-access-65vvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:35 crc kubenswrapper[4757]: I0129 15:27:35.781524 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5bee272-ecb5-4516-a079-c2d83bb7ffc5-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:35 crc kubenswrapper[4757]: I0129 15:27:35.781559 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65vvp\" (UniqueName: \"kubernetes.io/projected/f5bee272-ecb5-4516-a079-c2d83bb7ffc5-kube-api-access-65vvp\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:36 crc kubenswrapper[4757]: I0129 15:27:36.129210 4757 generic.go:334] "Generic (PLEG): container finished" podID="f5bee272-ecb5-4516-a079-c2d83bb7ffc5" containerID="a80876ef34a1004ab412fcd0b55eee194b54b6faa9a518c28fa24d0f3e56b131" exitCode=0 Jan 29 15:27:36 crc kubenswrapper[4757]: I0129 15:27:36.129254 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rcpx8" event={"ID":"f5bee272-ecb5-4516-a079-c2d83bb7ffc5","Type":"ContainerDied","Data":"a80876ef34a1004ab412fcd0b55eee194b54b6faa9a518c28fa24d0f3e56b131"} Jan 29 15:27:36 crc kubenswrapper[4757]: I0129 15:27:36.129317 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rcpx8" Jan 29 15:27:36 crc kubenswrapper[4757]: I0129 15:27:36.129383 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rcpx8" event={"ID":"f5bee272-ecb5-4516-a079-c2d83bb7ffc5","Type":"ContainerDied","Data":"ac8ee5c03cc73936cd35ec56b5f9fd6cbc69e351ca0f2ee8638f3c30421e982c"} Jan 29 15:27:36 crc kubenswrapper[4757]: I0129 15:27:36.129413 4757 scope.go:117] "RemoveContainer" containerID="a80876ef34a1004ab412fcd0b55eee194b54b6faa9a518c28fa24d0f3e56b131" Jan 29 15:27:36 crc kubenswrapper[4757]: I0129 15:27:36.145409 4757 scope.go:117] "RemoveContainer" containerID="ff3b1b039fea10471e3b9611702dadd1b4100876768df9262d2a6c01d917279b" Jan 29 15:27:36 crc kubenswrapper[4757]: I0129 15:27:36.159484 4757 scope.go:117] "RemoveContainer" containerID="009b6dfa7353bd9b1d628fe080a809693b5ab2402fe04bbf7a3c42dcd2b7a995" Jan 29 15:27:36 crc kubenswrapper[4757]: I0129 15:27:36.183361 4757 scope.go:117] "RemoveContainer" containerID="a80876ef34a1004ab412fcd0b55eee194b54b6faa9a518c28fa24d0f3e56b131" Jan 29 15:27:36 crc kubenswrapper[4757]: E0129 15:27:36.183830 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a80876ef34a1004ab412fcd0b55eee194b54b6faa9a518c28fa24d0f3e56b131\": container with ID starting with a80876ef34a1004ab412fcd0b55eee194b54b6faa9a518c28fa24d0f3e56b131 not found: ID does not exist" containerID="a80876ef34a1004ab412fcd0b55eee194b54b6faa9a518c28fa24d0f3e56b131" Jan 29 15:27:36 crc kubenswrapper[4757]: I0129 15:27:36.184498 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a80876ef34a1004ab412fcd0b55eee194b54b6faa9a518c28fa24d0f3e56b131"} err="failed to get container status \"a80876ef34a1004ab412fcd0b55eee194b54b6faa9a518c28fa24d0f3e56b131\": rpc error: code = NotFound desc = could not find container \"a80876ef34a1004ab412fcd0b55eee194b54b6faa9a518c28fa24d0f3e56b131\": container with ID starting with a80876ef34a1004ab412fcd0b55eee194b54b6faa9a518c28fa24d0f3e56b131 not found: ID does not exist" Jan 29 15:27:36 crc kubenswrapper[4757]: I0129 15:27:36.184546 4757 scope.go:117] "RemoveContainer" containerID="ff3b1b039fea10471e3b9611702dadd1b4100876768df9262d2a6c01d917279b" Jan 29 15:27:36 crc kubenswrapper[4757]: E0129 15:27:36.184895 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff3b1b039fea10471e3b9611702dadd1b4100876768df9262d2a6c01d917279b\": container with ID starting with ff3b1b039fea10471e3b9611702dadd1b4100876768df9262d2a6c01d917279b not found: ID does not exist" containerID="ff3b1b039fea10471e3b9611702dadd1b4100876768df9262d2a6c01d917279b" Jan 29 15:27:36 crc kubenswrapper[4757]: I0129 15:27:36.184938 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff3b1b039fea10471e3b9611702dadd1b4100876768df9262d2a6c01d917279b"} err="failed to get container status \"ff3b1b039fea10471e3b9611702dadd1b4100876768df9262d2a6c01d917279b\": rpc error: code = NotFound desc = could not find container \"ff3b1b039fea10471e3b9611702dadd1b4100876768df9262d2a6c01d917279b\": container with ID starting with ff3b1b039fea10471e3b9611702dadd1b4100876768df9262d2a6c01d917279b not found: ID does not exist" Jan 29 15:27:36 crc kubenswrapper[4757]: I0129 15:27:36.184964 4757 scope.go:117] "RemoveContainer" 
containerID="009b6dfa7353bd9b1d628fe080a809693b5ab2402fe04bbf7a3c42dcd2b7a995" Jan 29 15:27:36 crc kubenswrapper[4757]: E0129 15:27:36.185449 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"009b6dfa7353bd9b1d628fe080a809693b5ab2402fe04bbf7a3c42dcd2b7a995\": container with ID starting with 009b6dfa7353bd9b1d628fe080a809693b5ab2402fe04bbf7a3c42dcd2b7a995 not found: ID does not exist" containerID="009b6dfa7353bd9b1d628fe080a809693b5ab2402fe04bbf7a3c42dcd2b7a995" Jan 29 15:27:36 crc kubenswrapper[4757]: I0129 15:27:36.185488 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"009b6dfa7353bd9b1d628fe080a809693b5ab2402fe04bbf7a3c42dcd2b7a995"} err="failed to get container status \"009b6dfa7353bd9b1d628fe080a809693b5ab2402fe04bbf7a3c42dcd2b7a995\": rpc error: code = NotFound desc = could not find container \"009b6dfa7353bd9b1d628fe080a809693b5ab2402fe04bbf7a3c42dcd2b7a995\": container with ID starting with 009b6dfa7353bd9b1d628fe080a809693b5ab2402fe04bbf7a3c42dcd2b7a995 not found: ID does not exist" Jan 29 15:27:37 crc kubenswrapper[4757]: I0129 15:27:37.253339 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5bee272-ecb5-4516-a079-c2d83bb7ffc5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5bee272-ecb5-4516-a079-c2d83bb7ffc5" (UID: "f5bee272-ecb5-4516-a079-c2d83bb7ffc5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:27:37 crc kubenswrapper[4757]: I0129 15:27:37.308408 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5bee272-ecb5-4516-a079-c2d83bb7ffc5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:37 crc kubenswrapper[4757]: I0129 15:27:37.380877 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rcpx8"] Jan 29 15:27:37 crc kubenswrapper[4757]: I0129 15:27:37.391492 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rcpx8"] Jan 29 15:27:37 crc kubenswrapper[4757]: I0129 15:27:37.414623 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5bee272-ecb5-4516-a079-c2d83bb7ffc5" path="/var/lib/kubelet/pods/f5bee272-ecb5-4516-a079-c2d83bb7ffc5/volumes" Jan 29 15:27:40 crc kubenswrapper[4757]: I0129 15:27:40.851805 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-nzjdc"] Jan 29 15:27:40 crc kubenswrapper[4757]: E0129 15:27:40.853324 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5bee272-ecb5-4516-a079-c2d83bb7ffc5" containerName="extract-utilities" Jan 29 15:27:40 crc kubenswrapper[4757]: I0129 15:27:40.853430 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5bee272-ecb5-4516-a079-c2d83bb7ffc5" containerName="extract-utilities" Jan 29 15:27:40 crc kubenswrapper[4757]: E0129 15:27:40.853538 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5bee272-ecb5-4516-a079-c2d83bb7ffc5" containerName="extract-content" Jan 29 15:27:40 crc kubenswrapper[4757]: I0129 15:27:40.853626 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5bee272-ecb5-4516-a079-c2d83bb7ffc5" containerName="extract-content" Jan 29 15:27:40 crc kubenswrapper[4757]: E0129 15:27:40.853703 4757 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f5bee272-ecb5-4516-a079-c2d83bb7ffc5" containerName="registry-server" Jan 29 15:27:40 crc kubenswrapper[4757]: I0129 15:27:40.853771 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5bee272-ecb5-4516-a079-c2d83bb7ffc5" containerName="registry-server" Jan 29 15:27:40 crc kubenswrapper[4757]: I0129 15:27:40.853979 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5bee272-ecb5-4516-a079-c2d83bb7ffc5" containerName="registry-server" Jan 29 15:27:40 crc kubenswrapper[4757]: I0129 15:27:40.854798 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-nzjdc" Jan 29 15:27:40 crc kubenswrapper[4757]: I0129 15:27:40.857615 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-g4v5x" Jan 29 15:27:40 crc kubenswrapper[4757]: I0129 15:27:40.864651 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-nzjdc"] Jan 29 15:27:40 crc kubenswrapper[4757]: I0129 15:27:40.870872 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-kgkvx"] Jan 29 15:27:40 crc kubenswrapper[4757]: I0129 15:27:40.871720 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-kgkvx" Jan 29 15:27:40 crc kubenswrapper[4757]: I0129 15:27:40.873958 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 29 15:27:40 crc kubenswrapper[4757]: I0129 15:27:40.890540 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-kgkvx"] Jan 29 15:27:40 crc kubenswrapper[4757]: I0129 15:27:40.944738 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-c8zb9"] Jan 29 15:27:40 crc kubenswrapper[4757]: I0129 15:27:40.945786 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-c8zb9" Jan 29 15:27:40 crc kubenswrapper[4757]: I0129 15:27:40.951861 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a192d652-3d56-4191-908d-6f0241a07573-nmstate-lock\") pod \"nmstate-handler-c8zb9\" (UID: \"a192d652-3d56-4191-908d-6f0241a07573\") " pod="openshift-nmstate/nmstate-handler-c8zb9" Jan 29 15:27:40 crc kubenswrapper[4757]: I0129 15:27:40.952116 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a192d652-3d56-4191-908d-6f0241a07573-ovs-socket\") pod \"nmstate-handler-c8zb9\" (UID: \"a192d652-3d56-4191-908d-6f0241a07573\") " pod="openshift-nmstate/nmstate-handler-c8zb9" Jan 29 15:27:40 crc kubenswrapper[4757]: I0129 15:27:40.952535 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z9zl\" (UniqueName: \"kubernetes.io/projected/a192d652-3d56-4191-908d-6f0241a07573-kube-api-access-4z9zl\") pod \"nmstate-handler-c8zb9\" (UID: \"a192d652-3d56-4191-908d-6f0241a07573\") " pod="openshift-nmstate/nmstate-handler-c8zb9" Jan 29 15:27:40 crc kubenswrapper[4757]: I0129 15:27:40.952678 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsl6n\" (UniqueName: \"kubernetes.io/projected/6877b102-1cc7-4306-93db-567d7f162a2a-kube-api-access-bsl6n\") pod \"nmstate-webhook-8474b5b9d8-kgkvx\" (UID: \"6877b102-1cc7-4306-93db-567d7f162a2a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-kgkvx" Jan 29 15:27:40 crc kubenswrapper[4757]: I0129 15:27:40.952771 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6877b102-1cc7-4306-93db-567d7f162a2a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-kgkvx\" (UID: \"6877b102-1cc7-4306-93db-567d7f162a2a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-kgkvx" Jan 29 15:27:40 crc kubenswrapper[4757]: I0129 15:27:40.952862 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a192d652-3d56-4191-908d-6f0241a07573-dbus-socket\") pod \"nmstate-handler-c8zb9\" (UID: \"a192d652-3d56-4191-908d-6f0241a07573\") " pod="openshift-nmstate/nmstate-handler-c8zb9" Jan 29 15:27:40 crc kubenswrapper[4757]: I0129 15:27:40.952969 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cppls\" (UniqueName: \"kubernetes.io/projected/27377a37-8829-4efd-9df9-4804bc4689fc-kube-api-access-cppls\") pod \"nmstate-metrics-54757c584b-nzjdc\" (UID: \"27377a37-8829-4efd-9df9-4804bc4689fc\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-nzjdc" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.043490 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbhgn"] Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.044300 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbhgn" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.047349 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.049263 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.049603 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-wh494" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.054361 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a192d652-3d56-4191-908d-6f0241a07573-dbus-socket\") pod \"nmstate-handler-c8zb9\" (UID: \"a192d652-3d56-4191-908d-6f0241a07573\") " pod="openshift-nmstate/nmstate-handler-c8zb9" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.054776 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cppls\" (UniqueName: \"kubernetes.io/projected/27377a37-8829-4efd-9df9-4804bc4689fc-kube-api-access-cppls\") pod \"nmstate-metrics-54757c584b-nzjdc\" (UID: \"27377a37-8829-4efd-9df9-4804bc4689fc\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-nzjdc" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.055068 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7b21031-5a8e-4894-b583-c98cfd281944-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-dbhgn\" (UID: \"d7b21031-5a8e-4894-b583-c98cfd281944\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbhgn" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.055176 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a192d652-3d56-4191-908d-6f0241a07573-nmstate-lock\") pod \"nmstate-handler-c8zb9\" (UID: \"a192d652-3d56-4191-908d-6f0241a07573\") " pod="openshift-nmstate/nmstate-handler-c8zb9" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.055296 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a192d652-3d56-4191-908d-6f0241a07573-ovs-socket\") pod \"nmstate-handler-c8zb9\" (UID: \"a192d652-3d56-4191-908d-6f0241a07573\") " pod="openshift-nmstate/nmstate-handler-c8zb9" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.055425 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p796w\" (UniqueName: \"kubernetes.io/projected/d7b21031-5a8e-4894-b583-c98cfd281944-kube-api-access-p796w\") pod \"nmstate-console-plugin-7754f76f8b-dbhgn\" (UID: \"d7b21031-5a8e-4894-b583-c98cfd281944\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbhgn" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.054727 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a192d652-3d56-4191-908d-6f0241a07573-dbus-socket\") pod \"nmstate-handler-c8zb9\" (UID: \"a192d652-3d56-4191-908d-6f0241a07573\") " pod="openshift-nmstate/nmstate-handler-c8zb9" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.055395 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a192d652-3d56-4191-908d-6f0241a07573-ovs-socket\") pod \"nmstate-handler-c8zb9\" (UID: \"a192d652-3d56-4191-908d-6f0241a07573\") " pod="openshift-nmstate/nmstate-handler-c8zb9" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.055258 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a192d652-3d56-4191-908d-6f0241a07573-nmstate-lock\") pod \"nmstate-handler-c8zb9\" (UID: \"a192d652-3d56-4191-908d-6f0241a07573\") " pod="openshift-nmstate/nmstate-handler-c8zb9" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.055519 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z9zl\" (UniqueName: \"kubernetes.io/projected/a192d652-3d56-4191-908d-6f0241a07573-kube-api-access-4z9zl\") pod \"nmstate-handler-c8zb9\" (UID: \"a192d652-3d56-4191-908d-6f0241a07573\") " pod="openshift-nmstate/nmstate-handler-c8zb9" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.055711 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsl6n\" (UniqueName: \"kubernetes.io/projected/6877b102-1cc7-4306-93db-567d7f162a2a-kube-api-access-bsl6n\") pod \"nmstate-webhook-8474b5b9d8-kgkvx\" (UID: \"6877b102-1cc7-4306-93db-567d7f162a2a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-kgkvx" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.055775 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6877b102-1cc7-4306-93db-567d7f162a2a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-kgkvx\" (UID: \"6877b102-1cc7-4306-93db-567d7f162a2a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-kgkvx" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.055855 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d7b21031-5a8e-4894-b583-c98cfd281944-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-dbhgn\" (UID: \"d7b21031-5a8e-4894-b583-c98cfd281944\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbhgn" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.061392 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6877b102-1cc7-4306-93db-567d7f162a2a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-kgkvx\" (UID: \"6877b102-1cc7-4306-93db-567d7f162a2a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-kgkvx" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.070867 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbhgn"] Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.087839 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsl6n\" (UniqueName: \"kubernetes.io/projected/6877b102-1cc7-4306-93db-567d7f162a2a-kube-api-access-bsl6n\") pod \"nmstate-webhook-8474b5b9d8-kgkvx\" (UID: \"6877b102-1cc7-4306-93db-567d7f162a2a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-kgkvx" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.097992 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z9zl\" (UniqueName: \"kubernetes.io/projected/a192d652-3d56-4191-908d-6f0241a07573-kube-api-access-4z9zl\") pod \"nmstate-handler-c8zb9\" (UID: 
\"a192d652-3d56-4191-908d-6f0241a07573\") " pod="openshift-nmstate/nmstate-handler-c8zb9" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.106828 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cppls\" (UniqueName: \"kubernetes.io/projected/27377a37-8829-4efd-9df9-4804bc4689fc-kube-api-access-cppls\") pod \"nmstate-metrics-54757c584b-nzjdc\" (UID: \"27377a37-8829-4efd-9df9-4804bc4689fc\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-nzjdc" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.156541 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7b21031-5a8e-4894-b583-c98cfd281944-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-dbhgn\" (UID: \"d7b21031-5a8e-4894-b583-c98cfd281944\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbhgn" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.156593 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p796w\" (UniqueName: \"kubernetes.io/projected/d7b21031-5a8e-4894-b583-c98cfd281944-kube-api-access-p796w\") pod \"nmstate-console-plugin-7754f76f8b-dbhgn\" (UID: \"d7b21031-5a8e-4894-b583-c98cfd281944\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbhgn" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.156631 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d7b21031-5a8e-4894-b583-c98cfd281944-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-dbhgn\" (UID: \"d7b21031-5a8e-4894-b583-c98cfd281944\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbhgn" Jan 29 15:27:41 crc kubenswrapper[4757]: E0129 15:27:41.156712 4757 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 29 15:27:41 crc kubenswrapper[4757]: E0129 15:27:41.156777 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7b21031-5a8e-4894-b583-c98cfd281944-plugin-serving-cert podName:d7b21031-5a8e-4894-b583-c98cfd281944 nodeName:}" failed. No retries permitted until 2026-01-29 15:27:41.656759656 +0000 UTC m=+1024.946009893 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/d7b21031-5a8e-4894-b583-c98cfd281944-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-dbhgn" (UID: "d7b21031-5a8e-4894-b583-c98cfd281944") : secret "plugin-serving-cert" not found Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.157494 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d7b21031-5a8e-4894-b583-c98cfd281944-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-dbhgn\" (UID: \"d7b21031-5a8e-4894-b583-c98cfd281944\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbhgn" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.172529 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-nzjdc" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.187014 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p796w\" (UniqueName: \"kubernetes.io/projected/d7b21031-5a8e-4894-b583-c98cfd281944-kube-api-access-p796w\") pod \"nmstate-console-plugin-7754f76f8b-dbhgn\" (UID: \"d7b21031-5a8e-4894-b583-c98cfd281944\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbhgn" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.190019 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-kgkvx" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.250066 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-66c7f96c54-2q6xg"] Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.251160 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-66c7f96c54-2q6xg" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.264692 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-c8zb9" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.328848 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-66c7f96c54-2q6xg"] Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.358279 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/53d101cf-3a50-49d9-9257-055e0223756a-console-serving-cert\") pod \"console-66c7f96c54-2q6xg\" (UID: \"53d101cf-3a50-49d9-9257-055e0223756a\") " pod="openshift-console/console-66c7f96c54-2q6xg" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.358345 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53d101cf-3a50-49d9-9257-055e0223756a-trusted-ca-bundle\") pod \"console-66c7f96c54-2q6xg\" (UID: \"53d101cf-3a50-49d9-9257-055e0223756a\") " pod="openshift-console/console-66c7f96c54-2q6xg" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.358382 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/53d101cf-3a50-49d9-9257-055e0223756a-oauth-serving-cert\") pod \"console-66c7f96c54-2q6xg\" (UID: \"53d101cf-3a50-49d9-9257-055e0223756a\") " pod="openshift-console/console-66c7f96c54-2q6xg" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.358398 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/53d101cf-3a50-49d9-9257-055e0223756a-console-config\") pod \"console-66c7f96c54-2q6xg\" (UID: \"53d101cf-3a50-49d9-9257-055e0223756a\") " pod="openshift-console/console-66c7f96c54-2q6xg" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.358434 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv4qj\" (UniqueName: \"kubernetes.io/projected/53d101cf-3a50-49d9-9257-055e0223756a-kube-api-access-lv4qj\") pod \"console-66c7f96c54-2q6xg\" (UID: \"53d101cf-3a50-49d9-9257-055e0223756a\") " pod="openshift-console/console-66c7f96c54-2q6xg" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.358474 4757 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/53d101cf-3a50-49d9-9257-055e0223756a-service-ca\") pod \"console-66c7f96c54-2q6xg\" (UID: \"53d101cf-3a50-49d9-9257-055e0223756a\") " pod="openshift-console/console-66c7f96c54-2q6xg" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.358709 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/53d101cf-3a50-49d9-9257-055e0223756a-console-oauth-config\") pod \"console-66c7f96c54-2q6xg\" (UID: \"53d101cf-3a50-49d9-9257-055e0223756a\") " pod="openshift-console/console-66c7f96c54-2q6xg" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.459615 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/53d101cf-3a50-49d9-9257-055e0223756a-service-ca\") pod \"console-66c7f96c54-2q6xg\" (UID: \"53d101cf-3a50-49d9-9257-055e0223756a\") " pod="openshift-console/console-66c7f96c54-2q6xg" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.459667 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/53d101cf-3a50-49d9-9257-055e0223756a-console-oauth-config\") pod \"console-66c7f96c54-2q6xg\" (UID: \"53d101cf-3a50-49d9-9257-055e0223756a\") " pod="openshift-console/console-66c7f96c54-2q6xg" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.459699 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/53d101cf-3a50-49d9-9257-055e0223756a-console-serving-cert\") pod \"console-66c7f96c54-2q6xg\" (UID: \"53d101cf-3a50-49d9-9257-055e0223756a\") " pod="openshift-console/console-66c7f96c54-2q6xg" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.459741 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53d101cf-3a50-49d9-9257-055e0223756a-trusted-ca-bundle\") pod \"console-66c7f96c54-2q6xg\" (UID: \"53d101cf-3a50-49d9-9257-055e0223756a\") " pod="openshift-console/console-66c7f96c54-2q6xg" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.459780 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/53d101cf-3a50-49d9-9257-055e0223756a-oauth-serving-cert\") pod \"console-66c7f96c54-2q6xg\" (UID: \"53d101cf-3a50-49d9-9257-055e0223756a\") " pod="openshift-console/console-66c7f96c54-2q6xg" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.459794 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/53d101cf-3a50-49d9-9257-055e0223756a-console-config\") pod \"console-66c7f96c54-2q6xg\" (UID: \"53d101cf-3a50-49d9-9257-055e0223756a\") " pod="openshift-console/console-66c7f96c54-2q6xg" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.459834 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv4qj\" (UniqueName: \"kubernetes.io/projected/53d101cf-3a50-49d9-9257-055e0223756a-kube-api-access-lv4qj\") pod \"console-66c7f96c54-2q6xg\" (UID: \"53d101cf-3a50-49d9-9257-055e0223756a\") " pod="openshift-console/console-66c7f96c54-2q6xg" Jan 29 15:27:41 crc kubenswrapper[4757]: 
I0129 15:27:41.461560 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/53d101cf-3a50-49d9-9257-055e0223756a-oauth-serving-cert\") pod \"console-66c7f96c54-2q6xg\" (UID: \"53d101cf-3a50-49d9-9257-055e0223756a\") " pod="openshift-console/console-66c7f96c54-2q6xg" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.462158 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53d101cf-3a50-49d9-9257-055e0223756a-trusted-ca-bundle\") pod \"console-66c7f96c54-2q6xg\" (UID: \"53d101cf-3a50-49d9-9257-055e0223756a\") " pod="openshift-console/console-66c7f96c54-2q6xg" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.463779 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/53d101cf-3a50-49d9-9257-055e0223756a-console-config\") pod \"console-66c7f96c54-2q6xg\" (UID: \"53d101cf-3a50-49d9-9257-055e0223756a\") " pod="openshift-console/console-66c7f96c54-2q6xg" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.464420 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/53d101cf-3a50-49d9-9257-055e0223756a-service-ca\") pod \"console-66c7f96c54-2q6xg\" (UID: \"53d101cf-3a50-49d9-9257-055e0223756a\") " pod="openshift-console/console-66c7f96c54-2q6xg" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.464884 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/53d101cf-3a50-49d9-9257-055e0223756a-console-oauth-config\") pod \"console-66c7f96c54-2q6xg\" (UID: \"53d101cf-3a50-49d9-9257-055e0223756a\") " pod="openshift-console/console-66c7f96c54-2q6xg" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.471668 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/53d101cf-3a50-49d9-9257-055e0223756a-console-serving-cert\") pod \"console-66c7f96c54-2q6xg\" (UID: \"53d101cf-3a50-49d9-9257-055e0223756a\") " pod="openshift-console/console-66c7f96c54-2q6xg" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.479239 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv4qj\" (UniqueName: \"kubernetes.io/projected/53d101cf-3a50-49d9-9257-055e0223756a-kube-api-access-lv4qj\") pod \"console-66c7f96c54-2q6xg\" (UID: \"53d101cf-3a50-49d9-9257-055e0223756a\") " pod="openshift-console/console-66c7f96c54-2q6xg" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.542988 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-kgkvx"] Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.566840 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-66c7f96c54-2q6xg" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.664719 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7b21031-5a8e-4894-b583-c98cfd281944-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-dbhgn\" (UID: \"d7b21031-5a8e-4894-b583-c98cfd281944\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbhgn" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.670580 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7b21031-5a8e-4894-b583-c98cfd281944-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-dbhgn\" (UID: \"d7b21031-5a8e-4894-b583-c98cfd281944\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbhgn" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.670772 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-nzjdc"] Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.756797 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbhgn" Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.958604 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbhgn"] Jan 29 15:27:41 crc kubenswrapper[4757]: W0129 15:27:41.968026 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7b21031_5a8e_4894_b583_c98cfd281944.slice/crio-5a0f3b911018cc9ab5cd2a318f7739228fbe1c3dcc9f548c8e852bc339db397c WatchSource:0}: Error finding container 5a0f3b911018cc9ab5cd2a318f7739228fbe1c3dcc9f548c8e852bc339db397c: Status 404 returned error can't find the container with id 5a0f3b911018cc9ab5cd2a318f7739228fbe1c3dcc9f548c8e852bc339db397c Jan 29 15:27:41 crc kubenswrapper[4757]: I0129 15:27:41.991117 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-66c7f96c54-2q6xg"] Jan 29 15:27:41 crc kubenswrapper[4757]: W0129 15:27:41.993472 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53d101cf_3a50_49d9_9257_055e0223756a.slice/crio-a80b6a63775bf15beff96543b65501a2a29e73091a9814f95d8e86152f719483 WatchSource:0}: Error finding container a80b6a63775bf15beff96543b65501a2a29e73091a9814f95d8e86152f719483: Status 404 returned error can't find the container with id a80b6a63775bf15beff96543b65501a2a29e73091a9814f95d8e86152f719483 Jan 29 15:27:42 crc kubenswrapper[4757]: I0129 15:27:42.168012 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbhgn" event={"ID":"d7b21031-5a8e-4894-b583-c98cfd281944","Type":"ContainerStarted","Data":"5a0f3b911018cc9ab5cd2a318f7739228fbe1c3dcc9f548c8e852bc339db397c"} Jan 29 15:27:42 crc kubenswrapper[4757]: I0129 15:27:42.169157 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-c8zb9" event={"ID":"a192d652-3d56-4191-908d-6f0241a07573","Type":"ContainerStarted","Data":"917e78b03310a9fde22fc2db3c76beba6f8d794922356571f87e829127664e53"} Jan 29 15:27:42 crc kubenswrapper[4757]: I0129 15:27:42.170306 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-kgkvx" 
event={"ID":"6877b102-1cc7-4306-93db-567d7f162a2a","Type":"ContainerStarted","Data":"9b622bb34d54d2c96e0284e0fa5d09b6076ef070d115d4164fc4d2207217ca59"} Jan 29 15:27:42 crc kubenswrapper[4757]: I0129 15:27:42.171211 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-nzjdc" event={"ID":"27377a37-8829-4efd-9df9-4804bc4689fc","Type":"ContainerStarted","Data":"8a57f1c246bc6884b6c0ffe8235d7acece62b6b997e72defe46b305c0957316c"} Jan 29 15:27:42 crc kubenswrapper[4757]: I0129 15:27:42.172667 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66c7f96c54-2q6xg" event={"ID":"53d101cf-3a50-49d9-9257-055e0223756a","Type":"ContainerStarted","Data":"2de6474bcf4e7489cfd20e2d9c862d5467afb68ae570afdc09110e83fc6f88c1"} Jan 29 15:27:42 crc kubenswrapper[4757]: I0129 15:27:42.172692 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-66c7f96c54-2q6xg" event={"ID":"53d101cf-3a50-49d9-9257-055e0223756a","Type":"ContainerStarted","Data":"a80b6a63775bf15beff96543b65501a2a29e73091a9814f95d8e86152f719483"} Jan 29 15:27:42 crc kubenswrapper[4757]: I0129 15:27:42.191988 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-66c7f96c54-2q6xg" podStartSLOduration=1.19197092 podStartE2EDuration="1.19197092s" podCreationTimestamp="2026-01-29 15:27:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:27:42.191583719 +0000 UTC m=+1025.480833966" watchObservedRunningTime="2026-01-29 15:27:42.19197092 +0000 UTC m=+1025.481221147" Jan 29 15:27:44 crc kubenswrapper[4757]: I0129 15:27:44.212191 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-nzjdc" event={"ID":"27377a37-8829-4efd-9df9-4804bc4689fc","Type":"ContainerStarted","Data":"75e9acb4676775b7a4214c570d3680c41b342a9dfb1725ee954e3fbde1a3bed1"} Jan 29 15:27:44 crc kubenswrapper[4757]: I0129 15:27:44.266277 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jwxrh"] Jan 29 15:27:44 crc kubenswrapper[4757]: I0129 15:27:44.267444 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jwxrh" Jan 29 15:27:44 crc kubenswrapper[4757]: I0129 15:27:44.278202 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jwxrh"] Jan 29 15:27:44 crc kubenswrapper[4757]: I0129 15:27:44.403950 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fdkm\" (UniqueName: \"kubernetes.io/projected/4ac83783-a378-4456-a18a-a9c1d6ff87bb-kube-api-access-4fdkm\") pod \"community-operators-jwxrh\" (UID: \"4ac83783-a378-4456-a18a-a9c1d6ff87bb\") " pod="openshift-marketplace/community-operators-jwxrh" Jan 29 15:27:44 crc kubenswrapper[4757]: I0129 15:27:44.404052 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ac83783-a378-4456-a18a-a9c1d6ff87bb-utilities\") pod \"community-operators-jwxrh\" (UID: \"4ac83783-a378-4456-a18a-a9c1d6ff87bb\") " pod="openshift-marketplace/community-operators-jwxrh" Jan 29 15:27:44 crc kubenswrapper[4757]: I0129 15:27:44.404076 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ac83783-a378-4456-a18a-a9c1d6ff87bb-catalog-content\") pod \"community-operators-jwxrh\" (UID: \"4ac83783-a378-4456-a18a-a9c1d6ff87bb\") " pod="openshift-marketplace/community-operators-jwxrh" Jan 29 15:27:44 crc kubenswrapper[4757]: I0129 15:27:44.505336 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fdkm\" (UniqueName: \"kubernetes.io/projected/4ac83783-a378-4456-a18a-a9c1d6ff87bb-kube-api-access-4fdkm\") pod \"community-operators-jwxrh\" (UID: \"4ac83783-a378-4456-a18a-a9c1d6ff87bb\") " pod="openshift-marketplace/community-operators-jwxrh" Jan 29 15:27:44 crc kubenswrapper[4757]: I0129 15:27:44.505418 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ac83783-a378-4456-a18a-a9c1d6ff87bb-catalog-content\") pod \"community-operators-jwxrh\" (UID: \"4ac83783-a378-4456-a18a-a9c1d6ff87bb\") " pod="openshift-marketplace/community-operators-jwxrh" Jan 29 15:27:44 crc kubenswrapper[4757]: I0129 15:27:44.505441 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ac83783-a378-4456-a18a-a9c1d6ff87bb-utilities\") pod \"community-operators-jwxrh\" (UID: \"4ac83783-a378-4456-a18a-a9c1d6ff87bb\") " pod="openshift-marketplace/community-operators-jwxrh" Jan 29 15:27:44 crc kubenswrapper[4757]: I0129 15:27:44.506023 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ac83783-a378-4456-a18a-a9c1d6ff87bb-utilities\") pod \"community-operators-jwxrh\" (UID: \"4ac83783-a378-4456-a18a-a9c1d6ff87bb\") " pod="openshift-marketplace/community-operators-jwxrh" Jan 29 15:27:44 crc kubenswrapper[4757]: I0129 15:27:44.506346 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ac83783-a378-4456-a18a-a9c1d6ff87bb-catalog-content\") pod \"community-operators-jwxrh\" (UID: \"4ac83783-a378-4456-a18a-a9c1d6ff87bb\") " pod="openshift-marketplace/community-operators-jwxrh" Jan 29 15:27:44 crc kubenswrapper[4757]: I0129 15:27:44.527231 4757 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4fdkm\" (UniqueName: \"kubernetes.io/projected/4ac83783-a378-4456-a18a-a9c1d6ff87bb-kube-api-access-4fdkm\") pod \"community-operators-jwxrh\" (UID: \"4ac83783-a378-4456-a18a-a9c1d6ff87bb\") " pod="openshift-marketplace/community-operators-jwxrh" Jan 29 15:27:44 crc kubenswrapper[4757]: I0129 15:27:44.588359 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jwxrh" Jan 29 15:27:44 crc kubenswrapper[4757]: I0129 15:27:44.838906 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jwxrh"] Jan 29 15:27:45 crc kubenswrapper[4757]: I0129 15:27:45.220321 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-kgkvx" event={"ID":"6877b102-1cc7-4306-93db-567d7f162a2a","Type":"ContainerStarted","Data":"d693adf7626a90ab2a050af926d7875ed2298a9999b0fd0222eef3ae86a78aec"} Jan 29 15:27:45 crc kubenswrapper[4757]: I0129 15:27:45.220880 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-kgkvx" Jan 29 15:27:45 crc kubenswrapper[4757]: I0129 15:27:45.224494 4757 generic.go:334] "Generic (PLEG): container finished" podID="4ac83783-a378-4456-a18a-a9c1d6ff87bb" containerID="965a754046de04acd02863efaec00b0d69948c4265823165ae611dc36069557f" exitCode=0 Jan 29 15:27:45 crc kubenswrapper[4757]: I0129 15:27:45.224542 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwxrh" event={"ID":"4ac83783-a378-4456-a18a-a9c1d6ff87bb","Type":"ContainerDied","Data":"965a754046de04acd02863efaec00b0d69948c4265823165ae611dc36069557f"} Jan 29 15:27:45 crc kubenswrapper[4757]: I0129 15:27:45.224561 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwxrh" event={"ID":"4ac83783-a378-4456-a18a-a9c1d6ff87bb","Type":"ContainerStarted","Data":"9a716bb6ba73dd1f0faa00dd3a592bbeeae44358bbefcac047e7529799011a0d"} Jan 29 15:27:45 crc kubenswrapper[4757]: I0129 15:27:45.227156 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-c8zb9" event={"ID":"a192d652-3d56-4191-908d-6f0241a07573","Type":"ContainerStarted","Data":"83edc77a2ff6dd03fa90fe44a41cd563d13e7499aa7d3b24fcdf840bd110aa22"} Jan 29 15:27:45 crc kubenswrapper[4757]: I0129 15:27:45.227646 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-c8zb9" Jan 29 15:27:45 crc kubenswrapper[4757]: I0129 15:27:45.243236 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-kgkvx" podStartSLOduration=2.782401503 podStartE2EDuration="5.243215336s" podCreationTimestamp="2026-01-29 15:27:40 +0000 UTC" firstStartedPulling="2026-01-29 15:27:41.55229514 +0000 UTC m=+1024.841545377" lastFinishedPulling="2026-01-29 15:27:44.013108973 +0000 UTC m=+1027.302359210" observedRunningTime="2026-01-29 15:27:45.234596275 +0000 UTC m=+1028.523846512" watchObservedRunningTime="2026-01-29 15:27:45.243215336 +0000 UTC m=+1028.532465573" Jan 29 15:27:45 crc kubenswrapper[4757]: I0129 15:27:45.266992 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-c8zb9" podStartSLOduration=2.606333133 podStartE2EDuration="5.266969969s" podCreationTimestamp="2026-01-29 15:27:40 +0000 UTC" firstStartedPulling="2026-01-29 
15:27:41.316623101 +0000 UTC m=+1024.605873338" lastFinishedPulling="2026-01-29 15:27:43.977259907 +0000 UTC m=+1027.266510174" observedRunningTime="2026-01-29 15:27:45.261440998 +0000 UTC m=+1028.550691265" watchObservedRunningTime="2026-01-29 15:27:45.266969969 +0000 UTC m=+1028.556220216" Jan 29 15:27:46 crc kubenswrapper[4757]: I0129 15:27:46.235803 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbhgn" event={"ID":"d7b21031-5a8e-4894-b583-c98cfd281944","Type":"ContainerStarted","Data":"2c71bb51d21b1278d77a8552a594580238acbe0e4512aede849316b522bba9d3"} Jan 29 15:27:46 crc kubenswrapper[4757]: I0129 15:27:46.252358 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-dbhgn" podStartSLOduration=1.618027146 podStartE2EDuration="5.252314559s" podCreationTimestamp="2026-01-29 15:27:41 +0000 UTC" firstStartedPulling="2026-01-29 15:27:41.969793656 +0000 UTC m=+1025.259043893" lastFinishedPulling="2026-01-29 15:27:45.604081059 +0000 UTC m=+1028.893331306" observedRunningTime="2026-01-29 15:27:46.251434013 +0000 UTC m=+1029.540684250" watchObservedRunningTime="2026-01-29 15:27:46.252314559 +0000 UTC m=+1029.541564816" Jan 29 15:27:47 crc kubenswrapper[4757]: I0129 15:27:47.244554 4757 generic.go:334] "Generic (PLEG): container finished" podID="4ac83783-a378-4456-a18a-a9c1d6ff87bb" containerID="65d804c18ccdc4ff3e25edfca9a409b8d2874b4dcde60418c9a39a8e9badf8bd" exitCode=0 Jan 29 15:27:47 crc kubenswrapper[4757]: I0129 15:27:47.244620 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwxrh" event={"ID":"4ac83783-a378-4456-a18a-a9c1d6ff87bb","Type":"ContainerDied","Data":"65d804c18ccdc4ff3e25edfca9a409b8d2874b4dcde60418c9a39a8e9badf8bd"} Jan 29 15:27:48 crc kubenswrapper[4757]: I0129 15:27:48.258649 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwxrh" event={"ID":"4ac83783-a378-4456-a18a-a9c1d6ff87bb","Type":"ContainerStarted","Data":"5c8d11af99a2ddd03684fe0c67172809b68d17814f60f6dbd5a77f0b128ff974"} Jan 29 15:27:48 crc kubenswrapper[4757]: I0129 15:27:48.279789 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jwxrh" podStartSLOduration=2.159787278 podStartE2EDuration="4.279769353s" podCreationTimestamp="2026-01-29 15:27:44 +0000 UTC" firstStartedPulling="2026-01-29 15:27:45.523244069 +0000 UTC m=+1028.812494306" lastFinishedPulling="2026-01-29 15:27:47.643226144 +0000 UTC m=+1030.932476381" observedRunningTime="2026-01-29 15:27:48.277319882 +0000 UTC m=+1031.566570159" watchObservedRunningTime="2026-01-29 15:27:48.279769353 +0000 UTC m=+1031.569019600" Jan 29 15:27:51 crc kubenswrapper[4757]: I0129 15:27:51.290546 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-c8zb9" Jan 29 15:27:51 crc kubenswrapper[4757]: I0129 15:27:51.568048 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-66c7f96c54-2q6xg" Jan 29 15:27:51 crc kubenswrapper[4757]: I0129 15:27:51.568132 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-66c7f96c54-2q6xg" Jan 29 15:27:51 crc kubenswrapper[4757]: I0129 15:27:51.577396 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-66c7f96c54-2q6xg" Jan 29 
15:27:52 crc kubenswrapper[4757]: I0129 15:27:52.304732 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-66c7f96c54-2q6xg" Jan 29 15:27:52 crc kubenswrapper[4757]: I0129 15:27:52.371517 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-skxmw"] Jan 29 15:27:53 crc kubenswrapper[4757]: I0129 15:27:53.542466 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-nzjdc" event={"ID":"27377a37-8829-4efd-9df9-4804bc4689fc","Type":"ContainerStarted","Data":"c6e7843a56d725944d4e72f6dd0e40fb1e57c11404fe72dd8059c26b875cc1d0"} Jan 29 15:27:53 crc kubenswrapper[4757]: I0129 15:27:53.565353 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-nzjdc" podStartSLOduration=2.435658843 podStartE2EDuration="13.565326871s" podCreationTimestamp="2026-01-29 15:27:40 +0000 UTC" firstStartedPulling="2026-01-29 15:27:41.679973147 +0000 UTC m=+1024.969223384" lastFinishedPulling="2026-01-29 15:27:52.809641175 +0000 UTC m=+1036.098891412" observedRunningTime="2026-01-29 15:27:53.562369525 +0000 UTC m=+1036.851619852" watchObservedRunningTime="2026-01-29 15:27:53.565326871 +0000 UTC m=+1036.854577148" Jan 29 15:27:54 crc kubenswrapper[4757]: I0129 15:27:54.588778 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jwxrh" Jan 29 15:27:54 crc kubenswrapper[4757]: I0129 15:27:54.589510 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jwxrh" Jan 29 15:27:54 crc kubenswrapper[4757]: I0129 15:27:54.644669 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jwxrh" Jan 29 15:27:55 crc kubenswrapper[4757]: I0129 15:27:55.632263 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jwxrh" Jan 29 15:27:57 crc kubenswrapper[4757]: I0129 15:27:57.059402 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jwxrh"] Jan 29 15:27:57 crc kubenswrapper[4757]: I0129 15:27:57.564367 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jwxrh" podUID="4ac83783-a378-4456-a18a-a9c1d6ff87bb" containerName="registry-server" containerID="cri-o://5c8d11af99a2ddd03684fe0c67172809b68d17814f60f6dbd5a77f0b128ff974" gracePeriod=2 Jan 29 15:27:57 crc kubenswrapper[4757]: I0129 15:27:57.945340 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jwxrh" Jan 29 15:27:57 crc kubenswrapper[4757]: I0129 15:27:57.982299 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fdkm\" (UniqueName: \"kubernetes.io/projected/4ac83783-a378-4456-a18a-a9c1d6ff87bb-kube-api-access-4fdkm\") pod \"4ac83783-a378-4456-a18a-a9c1d6ff87bb\" (UID: \"4ac83783-a378-4456-a18a-a9c1d6ff87bb\") " Jan 29 15:27:57 crc kubenswrapper[4757]: I0129 15:27:57.982364 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ac83783-a378-4456-a18a-a9c1d6ff87bb-catalog-content\") pod \"4ac83783-a378-4456-a18a-a9c1d6ff87bb\" (UID: \"4ac83783-a378-4456-a18a-a9c1d6ff87bb\") " Jan 29 15:27:57 crc kubenswrapper[4757]: I0129 15:27:57.982426 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ac83783-a378-4456-a18a-a9c1d6ff87bb-utilities\") pod \"4ac83783-a378-4456-a18a-a9c1d6ff87bb\" (UID: \"4ac83783-a378-4456-a18a-a9c1d6ff87bb\") " Jan 29 15:27:57 crc kubenswrapper[4757]: I0129 15:27:57.983297 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ac83783-a378-4456-a18a-a9c1d6ff87bb-utilities" (OuterVolumeSpecName: "utilities") pod "4ac83783-a378-4456-a18a-a9c1d6ff87bb" (UID: "4ac83783-a378-4456-a18a-a9c1d6ff87bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:27:57 crc kubenswrapper[4757]: I0129 15:27:57.990390 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ac83783-a378-4456-a18a-a9c1d6ff87bb-kube-api-access-4fdkm" (OuterVolumeSpecName: "kube-api-access-4fdkm") pod "4ac83783-a378-4456-a18a-a9c1d6ff87bb" (UID: "4ac83783-a378-4456-a18a-a9c1d6ff87bb"). InnerVolumeSpecName "kube-api-access-4fdkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[4757]: I0129 15:27:58.033191 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ac83783-a378-4456-a18a-a9c1d6ff87bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ac83783-a378-4456-a18a-a9c1d6ff87bb" (UID: "4ac83783-a378-4456-a18a-a9c1d6ff87bb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:27:58 crc kubenswrapper[4757]: I0129 15:27:58.083574 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fdkm\" (UniqueName: \"kubernetes.io/projected/4ac83783-a378-4456-a18a-a9c1d6ff87bb-kube-api-access-4fdkm\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[4757]: I0129 15:27:58.083614 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ac83783-a378-4456-a18a-a9c1d6ff87bb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[4757]: I0129 15:27:58.083627 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ac83783-a378-4456-a18a-a9c1d6ff87bb-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:27:58 crc kubenswrapper[4757]: I0129 15:27:58.572254 4757 generic.go:334] "Generic (PLEG): container finished" podID="4ac83783-a378-4456-a18a-a9c1d6ff87bb" containerID="5c8d11af99a2ddd03684fe0c67172809b68d17814f60f6dbd5a77f0b128ff974" exitCode=0 Jan 29 15:27:58 crc kubenswrapper[4757]: I0129 15:27:58.572351 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jwxrh" Jan 29 15:27:58 crc kubenswrapper[4757]: I0129 15:27:58.572315 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwxrh" event={"ID":"4ac83783-a378-4456-a18a-a9c1d6ff87bb","Type":"ContainerDied","Data":"5c8d11af99a2ddd03684fe0c67172809b68d17814f60f6dbd5a77f0b128ff974"} Jan 29 15:27:58 crc kubenswrapper[4757]: I0129 15:27:58.572780 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jwxrh" event={"ID":"4ac83783-a378-4456-a18a-a9c1d6ff87bb","Type":"ContainerDied","Data":"9a716bb6ba73dd1f0faa00dd3a592bbeeae44358bbefcac047e7529799011a0d"} Jan 29 15:27:58 crc kubenswrapper[4757]: I0129 15:27:58.572857 4757 scope.go:117] "RemoveContainer" containerID="5c8d11af99a2ddd03684fe0c67172809b68d17814f60f6dbd5a77f0b128ff974" Jan 29 15:27:58 crc kubenswrapper[4757]: I0129 15:27:58.607669 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jwxrh"] Jan 29 15:27:58 crc kubenswrapper[4757]: I0129 15:27:58.609592 4757 scope.go:117] "RemoveContainer" containerID="65d804c18ccdc4ff3e25edfca9a409b8d2874b4dcde60418c9a39a8e9badf8bd" Jan 29 15:27:58 crc kubenswrapper[4757]: I0129 15:27:58.611911 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jwxrh"] Jan 29 15:27:58 crc kubenswrapper[4757]: I0129 15:27:58.640565 4757 scope.go:117] "RemoveContainer" containerID="965a754046de04acd02863efaec00b0d69948c4265823165ae611dc36069557f" Jan 29 15:27:58 crc kubenswrapper[4757]: I0129 15:27:58.664318 4757 scope.go:117] "RemoveContainer" containerID="5c8d11af99a2ddd03684fe0c67172809b68d17814f60f6dbd5a77f0b128ff974" Jan 29 15:27:58 crc kubenswrapper[4757]: E0129 15:27:58.664830 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c8d11af99a2ddd03684fe0c67172809b68d17814f60f6dbd5a77f0b128ff974\": container with ID starting with 5c8d11af99a2ddd03684fe0c67172809b68d17814f60f6dbd5a77f0b128ff974 not found: ID does not exist" containerID="5c8d11af99a2ddd03684fe0c67172809b68d17814f60f6dbd5a77f0b128ff974" Jan 29 15:27:58 crc kubenswrapper[4757]: I0129 15:27:58.664881 
4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c8d11af99a2ddd03684fe0c67172809b68d17814f60f6dbd5a77f0b128ff974"} err="failed to get container status \"5c8d11af99a2ddd03684fe0c67172809b68d17814f60f6dbd5a77f0b128ff974\": rpc error: code = NotFound desc = could not find container \"5c8d11af99a2ddd03684fe0c67172809b68d17814f60f6dbd5a77f0b128ff974\": container with ID starting with 5c8d11af99a2ddd03684fe0c67172809b68d17814f60f6dbd5a77f0b128ff974 not found: ID does not exist" Jan 29 15:27:58 crc kubenswrapper[4757]: I0129 15:27:58.664915 4757 scope.go:117] "RemoveContainer" containerID="65d804c18ccdc4ff3e25edfca9a409b8d2874b4dcde60418c9a39a8e9badf8bd" Jan 29 15:27:58 crc kubenswrapper[4757]: E0129 15:27:58.665308 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65d804c18ccdc4ff3e25edfca9a409b8d2874b4dcde60418c9a39a8e9badf8bd\": container with ID starting with 65d804c18ccdc4ff3e25edfca9a409b8d2874b4dcde60418c9a39a8e9badf8bd not found: ID does not exist" containerID="65d804c18ccdc4ff3e25edfca9a409b8d2874b4dcde60418c9a39a8e9badf8bd" Jan 29 15:27:58 crc kubenswrapper[4757]: I0129 15:27:58.665338 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65d804c18ccdc4ff3e25edfca9a409b8d2874b4dcde60418c9a39a8e9badf8bd"} err="failed to get container status \"65d804c18ccdc4ff3e25edfca9a409b8d2874b4dcde60418c9a39a8e9badf8bd\": rpc error: code = NotFound desc = could not find container \"65d804c18ccdc4ff3e25edfca9a409b8d2874b4dcde60418c9a39a8e9badf8bd\": container with ID starting with 65d804c18ccdc4ff3e25edfca9a409b8d2874b4dcde60418c9a39a8e9badf8bd not found: ID does not exist" Jan 29 15:27:58 crc kubenswrapper[4757]: I0129 15:27:58.665365 4757 scope.go:117] "RemoveContainer" containerID="965a754046de04acd02863efaec00b0d69948c4265823165ae611dc36069557f" Jan 29 15:27:58 crc kubenswrapper[4757]: E0129 15:27:58.665746 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"965a754046de04acd02863efaec00b0d69948c4265823165ae611dc36069557f\": container with ID starting with 965a754046de04acd02863efaec00b0d69948c4265823165ae611dc36069557f not found: ID does not exist" containerID="965a754046de04acd02863efaec00b0d69948c4265823165ae611dc36069557f" Jan 29 15:27:58 crc kubenswrapper[4757]: I0129 15:27:58.665784 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"965a754046de04acd02863efaec00b0d69948c4265823165ae611dc36069557f"} err="failed to get container status \"965a754046de04acd02863efaec00b0d69948c4265823165ae611dc36069557f\": rpc error: code = NotFound desc = could not find container \"965a754046de04acd02863efaec00b0d69948c4265823165ae611dc36069557f\": container with ID starting with 965a754046de04acd02863efaec00b0d69948c4265823165ae611dc36069557f not found: ID does not exist" Jan 29 15:27:59 crc kubenswrapper[4757]: I0129 15:27:59.409437 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ac83783-a378-4456-a18a-a9c1d6ff87bb" path="/var/lib/kubelet/pods/4ac83783-a378-4456-a18a-a9c1d6ff87bb/volumes" Jan 29 15:28:01 crc kubenswrapper[4757]: I0129 15:28:01.198115 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-kgkvx" Jan 29 15:28:13 crc kubenswrapper[4757]: I0129 15:28:13.911534 4757 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc"] Jan 29 15:28:13 crc kubenswrapper[4757]: E0129 15:28:13.914073 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac83783-a378-4456-a18a-a9c1d6ff87bb" containerName="extract-utilities" Jan 29 15:28:13 crc kubenswrapper[4757]: I0129 15:28:13.914117 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac83783-a378-4456-a18a-a9c1d6ff87bb" containerName="extract-utilities" Jan 29 15:28:13 crc kubenswrapper[4757]: E0129 15:28:13.914132 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac83783-a378-4456-a18a-a9c1d6ff87bb" containerName="registry-server" Jan 29 15:28:13 crc kubenswrapper[4757]: I0129 15:28:13.914139 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac83783-a378-4456-a18a-a9c1d6ff87bb" containerName="registry-server" Jan 29 15:28:13 crc kubenswrapper[4757]: E0129 15:28:13.914165 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac83783-a378-4456-a18a-a9c1d6ff87bb" containerName="extract-content" Jan 29 15:28:13 crc kubenswrapper[4757]: I0129 15:28:13.914172 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac83783-a378-4456-a18a-a9c1d6ff87bb" containerName="extract-content" Jan 29 15:28:13 crc kubenswrapper[4757]: I0129 15:28:13.914333 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ac83783-a378-4456-a18a-a9c1d6ff87bb" containerName="registry-server" Jan 29 15:28:13 crc kubenswrapper[4757]: I0129 15:28:13.915243 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc" Jan 29 15:28:13 crc kubenswrapper[4757]: I0129 15:28:13.918921 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 15:28:13 crc kubenswrapper[4757]: I0129 15:28:13.965565 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc"] Jan 29 15:28:14 crc kubenswrapper[4757]: I0129 15:28:14.109648 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/15b358ad-9ec6-457c-8876-9d3d7924e631-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc\" (UID: \"15b358ad-9ec6-457c-8876-9d3d7924e631\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc" Jan 29 15:28:14 crc kubenswrapper[4757]: I0129 15:28:14.109734 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/15b358ad-9ec6-457c-8876-9d3d7924e631-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc\" (UID: \"15b358ad-9ec6-457c-8876-9d3d7924e631\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc" Jan 29 15:28:14 crc kubenswrapper[4757]: I0129 15:28:14.109773 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrttj\" (UniqueName: \"kubernetes.io/projected/15b358ad-9ec6-457c-8876-9d3d7924e631-kube-api-access-vrttj\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc\" (UID: \"15b358ad-9ec6-457c-8876-9d3d7924e631\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc" Jan 29 15:28:14 crc kubenswrapper[4757]: I0129 15:28:14.211104 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/15b358ad-9ec6-457c-8876-9d3d7924e631-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc\" (UID: \"15b358ad-9ec6-457c-8876-9d3d7924e631\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc" Jan 29 15:28:14 crc kubenswrapper[4757]: I0129 15:28:14.211170 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/15b358ad-9ec6-457c-8876-9d3d7924e631-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc\" (UID: \"15b358ad-9ec6-457c-8876-9d3d7924e631\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc" Jan 29 15:28:14 crc kubenswrapper[4757]: I0129 15:28:14.211205 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrttj\" (UniqueName: \"kubernetes.io/projected/15b358ad-9ec6-457c-8876-9d3d7924e631-kube-api-access-vrttj\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc\" (UID: \"15b358ad-9ec6-457c-8876-9d3d7924e631\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc" Jan 29 15:28:14 crc kubenswrapper[4757]: I0129 15:28:14.212116 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/15b358ad-9ec6-457c-8876-9d3d7924e631-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc\" (UID: \"15b358ad-9ec6-457c-8876-9d3d7924e631\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc" Jan 29 15:28:14 crc kubenswrapper[4757]: I0129 15:28:14.212136 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/15b358ad-9ec6-457c-8876-9d3d7924e631-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc\" (UID: \"15b358ad-9ec6-457c-8876-9d3d7924e631\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc" Jan 29 15:28:14 crc kubenswrapper[4757]: I0129 15:28:14.231758 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrttj\" (UniqueName: \"kubernetes.io/projected/15b358ad-9ec6-457c-8876-9d3d7924e631-kube-api-access-vrttj\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc\" (UID: \"15b358ad-9ec6-457c-8876-9d3d7924e631\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc" Jan 29 15:28:14 crc kubenswrapper[4757]: I0129 15:28:14.239743 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc" Jan 29 15:28:14 crc kubenswrapper[4757]: I0129 15:28:14.684968 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc"] Jan 29 15:28:14 crc kubenswrapper[4757]: W0129 15:28:14.696782 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15b358ad_9ec6_457c_8876_9d3d7924e631.slice/crio-1868bae6eda8a9dcf90eddadbedd7d2563468f87303af8370121e8ea25795b5c WatchSource:0}: Error finding container 1868bae6eda8a9dcf90eddadbedd7d2563468f87303af8370121e8ea25795b5c: Status 404 returned error can't find the container with id 1868bae6eda8a9dcf90eddadbedd7d2563468f87303af8370121e8ea25795b5c Jan 29 15:28:15 crc kubenswrapper[4757]: I0129 15:28:15.680712 4757 generic.go:334] "Generic (PLEG): container finished" podID="15b358ad-9ec6-457c-8876-9d3d7924e631" containerID="db95946767667b38325bf677f720b8f4ed27d037733b77ba9f56d246357140d1" exitCode=0 Jan 29 15:28:15 crc kubenswrapper[4757]: I0129 15:28:15.680920 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc" event={"ID":"15b358ad-9ec6-457c-8876-9d3d7924e631","Type":"ContainerDied","Data":"db95946767667b38325bf677f720b8f4ed27d037733b77ba9f56d246357140d1"} Jan 29 15:28:15 crc kubenswrapper[4757]: I0129 15:28:15.681079 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc" event={"ID":"15b358ad-9ec6-457c-8876-9d3d7924e631","Type":"ContainerStarted","Data":"1868bae6eda8a9dcf90eddadbedd7d2563468f87303af8370121e8ea25795b5c"} Jan 29 15:28:17 crc kubenswrapper[4757]: I0129 15:28:17.425244 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-skxmw" podUID="a0f71154-b1ff-4e61-9c93-8bcb95678bce" containerName="console" containerID="cri-o://7d99940842de2d9a9f4d7a3901a12cb270cad2378f3a2ced7d21e743ecf4bec7" gracePeriod=15 Jan 29 15:28:17 crc kubenswrapper[4757]: I0129 15:28:17.692290 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-skxmw_a0f71154-b1ff-4e61-9c93-8bcb95678bce/console/0.log" Jan 29 15:28:17 crc kubenswrapper[4757]: I0129 15:28:17.692325 4757 generic.go:334] "Generic (PLEG): container finished" podID="a0f71154-b1ff-4e61-9c93-8bcb95678bce" containerID="7d99940842de2d9a9f4d7a3901a12cb270cad2378f3a2ced7d21e743ecf4bec7" exitCode=2 Jan 29 15:28:17 crc kubenswrapper[4757]: I0129 15:28:17.692370 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-skxmw" event={"ID":"a0f71154-b1ff-4e61-9c93-8bcb95678bce","Type":"ContainerDied","Data":"7d99940842de2d9a9f4d7a3901a12cb270cad2378f3a2ced7d21e743ecf4bec7"} Jan 29 15:28:17 crc kubenswrapper[4757]: I0129 15:28:17.694033 4757 generic.go:334] "Generic (PLEG): container finished" podID="15b358ad-9ec6-457c-8876-9d3d7924e631" containerID="8436cfa7de44781631b7580d483f6ae6c97999360a0b673b96fe61164ac9ef51" exitCode=0 Jan 29 15:28:17 crc kubenswrapper[4757]: I0129 15:28:17.694064 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc" 
event={"ID":"15b358ad-9ec6-457c-8876-9d3d7924e631","Type":"ContainerDied","Data":"8436cfa7de44781631b7580d483f6ae6c97999360a0b673b96fe61164ac9ef51"} Jan 29 15:28:17 crc kubenswrapper[4757]: I0129 15:28:17.792818 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-skxmw_a0f71154-b1ff-4e61-9c93-8bcb95678bce/console/0.log" Jan 29 15:28:17 crc kubenswrapper[4757]: I0129 15:28:17.792899 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:28:17 crc kubenswrapper[4757]: I0129 15:28:17.954298 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a0f71154-b1ff-4e61-9c93-8bcb95678bce-console-oauth-config\") pod \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " Jan 29 15:28:17 crc kubenswrapper[4757]: I0129 15:28:17.955365 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a0f71154-b1ff-4e61-9c93-8bcb95678bce-oauth-serving-cert\") pod \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " Jan 29 15:28:17 crc kubenswrapper[4757]: I0129 15:28:17.955411 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcp8m\" (UniqueName: \"kubernetes.io/projected/a0f71154-b1ff-4e61-9c93-8bcb95678bce-kube-api-access-kcp8m\") pod \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " Jan 29 15:28:17 crc kubenswrapper[4757]: I0129 15:28:17.955450 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0f71154-b1ff-4e61-9c93-8bcb95678bce-console-serving-cert\") pod \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " Jan 29 15:28:17 crc kubenswrapper[4757]: I0129 15:28:17.955467 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a0f71154-b1ff-4e61-9c93-8bcb95678bce-service-ca\") pod \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " Jan 29 15:28:17 crc kubenswrapper[4757]: I0129 15:28:17.955501 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0f71154-b1ff-4e61-9c93-8bcb95678bce-trusted-ca-bundle\") pod \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " Jan 29 15:28:17 crc kubenswrapper[4757]: I0129 15:28:17.955530 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a0f71154-b1ff-4e61-9c93-8bcb95678bce-console-config\") pod \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\" (UID: \"a0f71154-b1ff-4e61-9c93-8bcb95678bce\") " Jan 29 15:28:17 crc kubenswrapper[4757]: I0129 15:28:17.956322 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0f71154-b1ff-4e61-9c93-8bcb95678bce-console-config" (OuterVolumeSpecName: "console-config") pod "a0f71154-b1ff-4e61-9c93-8bcb95678bce" (UID: "a0f71154-b1ff-4e61-9c93-8bcb95678bce"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:17 crc kubenswrapper[4757]: I0129 15:28:17.956343 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0f71154-b1ff-4e61-9c93-8bcb95678bce-service-ca" (OuterVolumeSpecName: "service-ca") pod "a0f71154-b1ff-4e61-9c93-8bcb95678bce" (UID: "a0f71154-b1ff-4e61-9c93-8bcb95678bce"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:17 crc kubenswrapper[4757]: I0129 15:28:17.956471 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0f71154-b1ff-4e61-9c93-8bcb95678bce-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a0f71154-b1ff-4e61-9c93-8bcb95678bce" (UID: "a0f71154-b1ff-4e61-9c93-8bcb95678bce"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:17 crc kubenswrapper[4757]: I0129 15:28:17.956512 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0f71154-b1ff-4e61-9c93-8bcb95678bce-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a0f71154-b1ff-4e61-9c93-8bcb95678bce" (UID: "a0f71154-b1ff-4e61-9c93-8bcb95678bce"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:17 crc kubenswrapper[4757]: I0129 15:28:17.960311 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0f71154-b1ff-4e61-9c93-8bcb95678bce-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a0f71154-b1ff-4e61-9c93-8bcb95678bce" (UID: "a0f71154-b1ff-4e61-9c93-8bcb95678bce"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:17 crc kubenswrapper[4757]: I0129 15:28:17.960383 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0f71154-b1ff-4e61-9c93-8bcb95678bce-kube-api-access-kcp8m" (OuterVolumeSpecName: "kube-api-access-kcp8m") pod "a0f71154-b1ff-4e61-9c93-8bcb95678bce" (UID: "a0f71154-b1ff-4e61-9c93-8bcb95678bce"). InnerVolumeSpecName "kube-api-access-kcp8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:17 crc kubenswrapper[4757]: I0129 15:28:17.960682 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0f71154-b1ff-4e61-9c93-8bcb95678bce-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a0f71154-b1ff-4e61-9c93-8bcb95678bce" (UID: "a0f71154-b1ff-4e61-9c93-8bcb95678bce"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:18 crc kubenswrapper[4757]: I0129 15:28:18.066487 4757 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0f71154-b1ff-4e61-9c93-8bcb95678bce-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:18 crc kubenswrapper[4757]: I0129 15:28:18.066527 4757 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a0f71154-b1ff-4e61-9c93-8bcb95678bce-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:18 crc kubenswrapper[4757]: I0129 15:28:18.066540 4757 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a0f71154-b1ff-4e61-9c93-8bcb95678bce-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:18 crc kubenswrapper[4757]: I0129 15:28:18.066556 4757 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a0f71154-b1ff-4e61-9c93-8bcb95678bce-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:18 crc kubenswrapper[4757]: I0129 15:28:18.066573 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcp8m\" (UniqueName: \"kubernetes.io/projected/a0f71154-b1ff-4e61-9c93-8bcb95678bce-kube-api-access-kcp8m\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:18 crc kubenswrapper[4757]: I0129 15:28:18.066587 4757 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0f71154-b1ff-4e61-9c93-8bcb95678bce-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:18 crc kubenswrapper[4757]: I0129 15:28:18.066599 4757 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a0f71154-b1ff-4e61-9c93-8bcb95678bce-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:18 crc kubenswrapper[4757]: I0129 15:28:18.702255 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-skxmw_a0f71154-b1ff-4e61-9c93-8bcb95678bce/console/0.log" Jan 29 15:28:18 crc kubenswrapper[4757]: I0129 15:28:18.702630 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-skxmw" event={"ID":"a0f71154-b1ff-4e61-9c93-8bcb95678bce","Type":"ContainerDied","Data":"6623043e5d5ee87ab09656f41ca181c2e045e217709097f1a3dab3c981305c89"} Jan 29 15:28:18 crc kubenswrapper[4757]: I0129 15:28:18.702721 4757 scope.go:117] "RemoveContainer" containerID="7d99940842de2d9a9f4d7a3901a12cb270cad2378f3a2ced7d21e743ecf4bec7" Jan 29 15:28:18 crc kubenswrapper[4757]: I0129 15:28:18.702751 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-skxmw" Jan 29 15:28:18 crc kubenswrapper[4757]: I0129 15:28:18.708489 4757 generic.go:334] "Generic (PLEG): container finished" podID="15b358ad-9ec6-457c-8876-9d3d7924e631" containerID="b12757ccae1787f4693a0b2e21c296929020bd0f4398a87d0353b630ea9fd056" exitCode=0 Jan 29 15:28:18 crc kubenswrapper[4757]: I0129 15:28:18.708562 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc" event={"ID":"15b358ad-9ec6-457c-8876-9d3d7924e631","Type":"ContainerDied","Data":"b12757ccae1787f4693a0b2e21c296929020bd0f4398a87d0353b630ea9fd056"} Jan 29 15:28:18 crc kubenswrapper[4757]: I0129 15:28:18.756106 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-skxmw"] Jan 29 15:28:18 crc kubenswrapper[4757]: I0129 15:28:18.761603 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-skxmw"] Jan 29 15:28:19 crc kubenswrapper[4757]: I0129 15:28:19.406692 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0f71154-b1ff-4e61-9c93-8bcb95678bce" path="/var/lib/kubelet/pods/a0f71154-b1ff-4e61-9c93-8bcb95678bce/volumes" Jan 29 15:28:19 crc kubenswrapper[4757]: I0129 15:28:19.968371 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc" Jan 29 15:28:20 crc kubenswrapper[4757]: I0129 15:28:19.990546 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrttj\" (UniqueName: \"kubernetes.io/projected/15b358ad-9ec6-457c-8876-9d3d7924e631-kube-api-access-vrttj\") pod \"15b358ad-9ec6-457c-8876-9d3d7924e631\" (UID: \"15b358ad-9ec6-457c-8876-9d3d7924e631\") " Jan 29 15:28:20 crc kubenswrapper[4757]: I0129 15:28:19.990687 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/15b358ad-9ec6-457c-8876-9d3d7924e631-bundle\") pod \"15b358ad-9ec6-457c-8876-9d3d7924e631\" (UID: \"15b358ad-9ec6-457c-8876-9d3d7924e631\") " Jan 29 15:28:20 crc kubenswrapper[4757]: I0129 15:28:19.990706 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/15b358ad-9ec6-457c-8876-9d3d7924e631-util\") pod \"15b358ad-9ec6-457c-8876-9d3d7924e631\" (UID: \"15b358ad-9ec6-457c-8876-9d3d7924e631\") " Jan 29 15:28:20 crc kubenswrapper[4757]: I0129 15:28:19.991703 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15b358ad-9ec6-457c-8876-9d3d7924e631-bundle" (OuterVolumeSpecName: "bundle") pod "15b358ad-9ec6-457c-8876-9d3d7924e631" (UID: "15b358ad-9ec6-457c-8876-9d3d7924e631"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:28:20 crc kubenswrapper[4757]: I0129 15:28:20.003561 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15b358ad-9ec6-457c-8876-9d3d7924e631-kube-api-access-vrttj" (OuterVolumeSpecName: "kube-api-access-vrttj") pod "15b358ad-9ec6-457c-8876-9d3d7924e631" (UID: "15b358ad-9ec6-457c-8876-9d3d7924e631"). InnerVolumeSpecName "kube-api-access-vrttj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:20 crc kubenswrapper[4757]: I0129 15:28:20.091709 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrttj\" (UniqueName: \"kubernetes.io/projected/15b358ad-9ec6-457c-8876-9d3d7924e631-kube-api-access-vrttj\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:20 crc kubenswrapper[4757]: I0129 15:28:20.091758 4757 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/15b358ad-9ec6-457c-8876-9d3d7924e631-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:20 crc kubenswrapper[4757]: I0129 15:28:20.204414 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15b358ad-9ec6-457c-8876-9d3d7924e631-util" (OuterVolumeSpecName: "util") pod "15b358ad-9ec6-457c-8876-9d3d7924e631" (UID: "15b358ad-9ec6-457c-8876-9d3d7924e631"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:28:20 crc kubenswrapper[4757]: I0129 15:28:20.293498 4757 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/15b358ad-9ec6-457c-8876-9d3d7924e631-util\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:20 crc kubenswrapper[4757]: I0129 15:28:20.722381 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc" event={"ID":"15b358ad-9ec6-457c-8876-9d3d7924e631","Type":"ContainerDied","Data":"1868bae6eda8a9dcf90eddadbedd7d2563468f87303af8370121e8ea25795b5c"} Jan 29 15:28:20 crc kubenswrapper[4757]: I0129 15:28:20.722665 4757 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1868bae6eda8a9dcf90eddadbedd7d2563468f87303af8370121e8ea25795b5c" Jan 29 15:28:20 crc kubenswrapper[4757]: I0129 15:28:20.722459 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc" Jan 29 15:28:24 crc kubenswrapper[4757]: I0129 15:28:24.862750 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2nhvc"] Jan 29 15:28:24 crc kubenswrapper[4757]: E0129 15:28:24.862994 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15b358ad-9ec6-457c-8876-9d3d7924e631" containerName="pull" Jan 29 15:28:24 crc kubenswrapper[4757]: I0129 15:28:24.863007 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="15b358ad-9ec6-457c-8876-9d3d7924e631" containerName="pull" Jan 29 15:28:24 crc kubenswrapper[4757]: E0129 15:28:24.863014 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15b358ad-9ec6-457c-8876-9d3d7924e631" containerName="util" Jan 29 15:28:24 crc kubenswrapper[4757]: I0129 15:28:24.863020 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="15b358ad-9ec6-457c-8876-9d3d7924e631" containerName="util" Jan 29 15:28:24 crc kubenswrapper[4757]: E0129 15:28:24.863036 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15b358ad-9ec6-457c-8876-9d3d7924e631" containerName="extract" Jan 29 15:28:24 crc kubenswrapper[4757]: I0129 15:28:24.863043 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="15b358ad-9ec6-457c-8876-9d3d7924e631" containerName="extract" Jan 29 15:28:24 crc kubenswrapper[4757]: E0129 15:28:24.863055 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0f71154-b1ff-4e61-9c93-8bcb95678bce" containerName="console" Jan 29 15:28:24 crc kubenswrapper[4757]: I0129 15:28:24.863061 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0f71154-b1ff-4e61-9c93-8bcb95678bce" containerName="console" Jan 29 15:28:24 crc kubenswrapper[4757]: I0129 15:28:24.863171 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0f71154-b1ff-4e61-9c93-8bcb95678bce" containerName="console" Jan 29 15:28:24 crc kubenswrapper[4757]: I0129 15:28:24.863189 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="15b358ad-9ec6-457c-8876-9d3d7924e631" containerName="extract" Jan 29 15:28:24 crc kubenswrapper[4757]: I0129 15:28:24.864059 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2nhvc" Jan 29 15:28:24 crc kubenswrapper[4757]: I0129 15:28:24.878834 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2nhvc"] Jan 29 15:28:25 crc kubenswrapper[4757]: I0129 15:28:25.052910 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flnp5\" (UniqueName: \"kubernetes.io/projected/256be88b-5169-453f-9b20-e59f8539f582-kube-api-access-flnp5\") pod \"certified-operators-2nhvc\" (UID: \"256be88b-5169-453f-9b20-e59f8539f582\") " pod="openshift-marketplace/certified-operators-2nhvc" Jan 29 15:28:25 crc kubenswrapper[4757]: I0129 15:28:25.052977 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/256be88b-5169-453f-9b20-e59f8539f582-utilities\") pod \"certified-operators-2nhvc\" (UID: \"256be88b-5169-453f-9b20-e59f8539f582\") " pod="openshift-marketplace/certified-operators-2nhvc" Jan 29 15:28:25 crc kubenswrapper[4757]: I0129 15:28:25.053032 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/256be88b-5169-453f-9b20-e59f8539f582-catalog-content\") pod \"certified-operators-2nhvc\" (UID: \"256be88b-5169-453f-9b20-e59f8539f582\") " pod="openshift-marketplace/certified-operators-2nhvc" Jan 29 15:28:25 crc kubenswrapper[4757]: I0129 15:28:25.154601 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/256be88b-5169-453f-9b20-e59f8539f582-utilities\") pod \"certified-operators-2nhvc\" (UID: \"256be88b-5169-453f-9b20-e59f8539f582\") " pod="openshift-marketplace/certified-operators-2nhvc" Jan 29 15:28:25 crc kubenswrapper[4757]: I0129 15:28:25.154670 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/256be88b-5169-453f-9b20-e59f8539f582-catalog-content\") pod \"certified-operators-2nhvc\" (UID: \"256be88b-5169-453f-9b20-e59f8539f582\") " pod="openshift-marketplace/certified-operators-2nhvc" Jan 29 15:28:25 crc kubenswrapper[4757]: I0129 15:28:25.154722 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flnp5\" (UniqueName: \"kubernetes.io/projected/256be88b-5169-453f-9b20-e59f8539f582-kube-api-access-flnp5\") pod \"certified-operators-2nhvc\" (UID: \"256be88b-5169-453f-9b20-e59f8539f582\") " pod="openshift-marketplace/certified-operators-2nhvc" Jan 29 15:28:25 crc kubenswrapper[4757]: I0129 15:28:25.155145 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/256be88b-5169-453f-9b20-e59f8539f582-utilities\") pod \"certified-operators-2nhvc\" (UID: \"256be88b-5169-453f-9b20-e59f8539f582\") " pod="openshift-marketplace/certified-operators-2nhvc" Jan 29 15:28:25 crc kubenswrapper[4757]: I0129 15:28:25.155174 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/256be88b-5169-453f-9b20-e59f8539f582-catalog-content\") pod \"certified-operators-2nhvc\" (UID: \"256be88b-5169-453f-9b20-e59f8539f582\") " pod="openshift-marketplace/certified-operators-2nhvc" Jan 29 15:28:25 crc kubenswrapper[4757]: I0129 15:28:25.193245 4757 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-flnp5\" (UniqueName: \"kubernetes.io/projected/256be88b-5169-453f-9b20-e59f8539f582-kube-api-access-flnp5\") pod \"certified-operators-2nhvc\" (UID: \"256be88b-5169-453f-9b20-e59f8539f582\") " pod="openshift-marketplace/certified-operators-2nhvc" Jan 29 15:28:25 crc kubenswrapper[4757]: I0129 15:28:25.480988 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2nhvc" Jan 29 15:28:26 crc kubenswrapper[4757]: I0129 15:28:26.049350 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2nhvc"] Jan 29 15:28:26 crc kubenswrapper[4757]: I0129 15:28:26.764082 4757 generic.go:334] "Generic (PLEG): container finished" podID="256be88b-5169-453f-9b20-e59f8539f582" containerID="de22d15b276a1b6b2294c765adda0bb306bf3ebd8944f6f6507be5e9367c7a08" exitCode=0 Jan 29 15:28:26 crc kubenswrapper[4757]: I0129 15:28:26.764406 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2nhvc" event={"ID":"256be88b-5169-453f-9b20-e59f8539f582","Type":"ContainerDied","Data":"de22d15b276a1b6b2294c765adda0bb306bf3ebd8944f6f6507be5e9367c7a08"} Jan 29 15:28:26 crc kubenswrapper[4757]: I0129 15:28:26.764435 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2nhvc" event={"ID":"256be88b-5169-453f-9b20-e59f8539f582","Type":"ContainerStarted","Data":"4b520cd39ceaff9dbf91bff410f4ee0f7e0bf655c832298e05e213dff884ee19"} Jan 29 15:28:27 crc kubenswrapper[4757]: I0129 15:28:27.772412 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2nhvc" event={"ID":"256be88b-5169-453f-9b20-e59f8539f582","Type":"ContainerStarted","Data":"42f98a19cb331d8af8626c86fee6e33a32873303f27d34f1613f7605fc4bccc1"} Jan 29 15:28:28 crc kubenswrapper[4757]: I0129 15:28:28.779080 4757 generic.go:334] "Generic (PLEG): container finished" podID="256be88b-5169-453f-9b20-e59f8539f582" containerID="42f98a19cb331d8af8626c86fee6e33a32873303f27d34f1613f7605fc4bccc1" exitCode=0 Jan 29 15:28:28 crc kubenswrapper[4757]: I0129 15:28:28.779186 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2nhvc" event={"ID":"256be88b-5169-453f-9b20-e59f8539f582","Type":"ContainerDied","Data":"42f98a19cb331d8af8626c86fee6e33a32873303f27d34f1613f7605fc4bccc1"} Jan 29 15:28:29 crc kubenswrapper[4757]: I0129 15:28:29.786226 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2nhvc" event={"ID":"256be88b-5169-453f-9b20-e59f8539f582","Type":"ContainerStarted","Data":"7961b0797986bd806218b7cefca1d225421da41d3aa59b1e0aa5b41fbf6539e5"} Jan 29 15:28:29 crc kubenswrapper[4757]: I0129 15:28:29.809224 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2nhvc" podStartSLOduration=3.30077497 podStartE2EDuration="5.809205989s" podCreationTimestamp="2026-01-29 15:28:24 +0000 UTC" firstStartedPulling="2026-01-29 15:28:26.76642866 +0000 UTC m=+1070.055678897" lastFinishedPulling="2026-01-29 15:28:29.274859689 +0000 UTC m=+1072.564109916" observedRunningTime="2026-01-29 15:28:29.809166128 +0000 UTC m=+1073.098416365" watchObservedRunningTime="2026-01-29 15:28:29.809205989 +0000 UTC m=+1073.098456226" Jan 29 15:28:30 crc kubenswrapper[4757]: I0129 15:28:30.778205 4757 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["metallb-system/metallb-operator-controller-manager-5c94d76d46-599j4"] Jan 29 15:28:30 crc kubenswrapper[4757]: I0129 15:28:30.779215 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5c94d76d46-599j4" Jan 29 15:28:30 crc kubenswrapper[4757]: I0129 15:28:30.782395 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 29 15:28:30 crc kubenswrapper[4757]: I0129 15:28:30.782480 4757 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 29 15:28:30 crc kubenswrapper[4757]: I0129 15:28:30.782638 4757 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-48wvh" Jan 29 15:28:30 crc kubenswrapper[4757]: I0129 15:28:30.782695 4757 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 29 15:28:30 crc kubenswrapper[4757]: I0129 15:28:30.782853 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 29 15:28:30 crc kubenswrapper[4757]: I0129 15:28:30.820729 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh4pr\" (UniqueName: \"kubernetes.io/projected/b1bec22e-bc28-4615-b6f8-e639da353268-kube-api-access-qh4pr\") pod \"metallb-operator-controller-manager-5c94d76d46-599j4\" (UID: \"b1bec22e-bc28-4615-b6f8-e639da353268\") " pod="metallb-system/metallb-operator-controller-manager-5c94d76d46-599j4" Jan 29 15:28:30 crc kubenswrapper[4757]: I0129 15:28:30.820786 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b1bec22e-bc28-4615-b6f8-e639da353268-webhook-cert\") pod \"metallb-operator-controller-manager-5c94d76d46-599j4\" (UID: \"b1bec22e-bc28-4615-b6f8-e639da353268\") " pod="metallb-system/metallb-operator-controller-manager-5c94d76d46-599j4" Jan 29 15:28:30 crc kubenswrapper[4757]: I0129 15:28:30.820815 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b1bec22e-bc28-4615-b6f8-e639da353268-apiservice-cert\") pod \"metallb-operator-controller-manager-5c94d76d46-599j4\" (UID: \"b1bec22e-bc28-4615-b6f8-e639da353268\") " pod="metallb-system/metallb-operator-controller-manager-5c94d76d46-599j4" Jan 29 15:28:30 crc kubenswrapper[4757]: I0129 15:28:30.883514 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5c94d76d46-599j4"] Jan 29 15:28:30 crc kubenswrapper[4757]: I0129 15:28:30.921560 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh4pr\" (UniqueName: \"kubernetes.io/projected/b1bec22e-bc28-4615-b6f8-e639da353268-kube-api-access-qh4pr\") pod \"metallb-operator-controller-manager-5c94d76d46-599j4\" (UID: \"b1bec22e-bc28-4615-b6f8-e639da353268\") " pod="metallb-system/metallb-operator-controller-manager-5c94d76d46-599j4" Jan 29 15:28:30 crc kubenswrapper[4757]: I0129 15:28:30.921625 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b1bec22e-bc28-4615-b6f8-e639da353268-webhook-cert\") pod \"metallb-operator-controller-manager-5c94d76d46-599j4\" 
(UID: \"b1bec22e-bc28-4615-b6f8-e639da353268\") " pod="metallb-system/metallb-operator-controller-manager-5c94d76d46-599j4" Jan 29 15:28:30 crc kubenswrapper[4757]: I0129 15:28:30.921753 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b1bec22e-bc28-4615-b6f8-e639da353268-apiservice-cert\") pod \"metallb-operator-controller-manager-5c94d76d46-599j4\" (UID: \"b1bec22e-bc28-4615-b6f8-e639da353268\") " pod="metallb-system/metallb-operator-controller-manager-5c94d76d46-599j4" Jan 29 15:28:30 crc kubenswrapper[4757]: I0129 15:28:30.929217 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b1bec22e-bc28-4615-b6f8-e639da353268-apiservice-cert\") pod \"metallb-operator-controller-manager-5c94d76d46-599j4\" (UID: \"b1bec22e-bc28-4615-b6f8-e639da353268\") " pod="metallb-system/metallb-operator-controller-manager-5c94d76d46-599j4" Jan 29 15:28:30 crc kubenswrapper[4757]: I0129 15:28:30.947090 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b1bec22e-bc28-4615-b6f8-e639da353268-webhook-cert\") pod \"metallb-operator-controller-manager-5c94d76d46-599j4\" (UID: \"b1bec22e-bc28-4615-b6f8-e639da353268\") " pod="metallb-system/metallb-operator-controller-manager-5c94d76d46-599j4" Jan 29 15:28:30 crc kubenswrapper[4757]: I0129 15:28:30.953232 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh4pr\" (UniqueName: \"kubernetes.io/projected/b1bec22e-bc28-4615-b6f8-e639da353268-kube-api-access-qh4pr\") pod \"metallb-operator-controller-manager-5c94d76d46-599j4\" (UID: \"b1bec22e-bc28-4615-b6f8-e639da353268\") " pod="metallb-system/metallb-operator-controller-manager-5c94d76d46-599j4" Jan 29 15:28:31 crc kubenswrapper[4757]: I0129 15:28:31.103230 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5c94d76d46-599j4" Jan 29 15:28:31 crc kubenswrapper[4757]: I0129 15:28:31.104330 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6d644c45b7-tjdph"] Jan 29 15:28:31 crc kubenswrapper[4757]: I0129 15:28:31.105195 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6d644c45b7-tjdph" Jan 29 15:28:31 crc kubenswrapper[4757]: I0129 15:28:31.118074 4757 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 29 15:28:31 crc kubenswrapper[4757]: I0129 15:28:31.118459 4757 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-lgsd7" Jan 29 15:28:31 crc kubenswrapper[4757]: I0129 15:28:31.119190 4757 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 15:28:31 crc kubenswrapper[4757]: I0129 15:28:31.124230 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbngv\" (UniqueName: \"kubernetes.io/projected/8481f32c-d659-4dbb-9ddf-962d17346afc-kube-api-access-bbngv\") pod \"metallb-operator-webhook-server-6d644c45b7-tjdph\" (UID: \"8481f32c-d659-4dbb-9ddf-962d17346afc\") " pod="metallb-system/metallb-operator-webhook-server-6d644c45b7-tjdph" Jan 29 15:28:31 crc kubenswrapper[4757]: I0129 15:28:31.124312 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8481f32c-d659-4dbb-9ddf-962d17346afc-apiservice-cert\") pod \"metallb-operator-webhook-server-6d644c45b7-tjdph\" (UID: \"8481f32c-d659-4dbb-9ddf-962d17346afc\") " pod="metallb-system/metallb-operator-webhook-server-6d644c45b7-tjdph" Jan 29 15:28:31 crc kubenswrapper[4757]: I0129 15:28:31.124350 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8481f32c-d659-4dbb-9ddf-962d17346afc-webhook-cert\") pod \"metallb-operator-webhook-server-6d644c45b7-tjdph\" (UID: \"8481f32c-d659-4dbb-9ddf-962d17346afc\") " pod="metallb-system/metallb-operator-webhook-server-6d644c45b7-tjdph" Jan 29 15:28:31 crc kubenswrapper[4757]: I0129 15:28:31.135015 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6d644c45b7-tjdph"] Jan 29 15:28:31 crc kubenswrapper[4757]: I0129 15:28:31.225824 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbngv\" (UniqueName: \"kubernetes.io/projected/8481f32c-d659-4dbb-9ddf-962d17346afc-kube-api-access-bbngv\") pod \"metallb-operator-webhook-server-6d644c45b7-tjdph\" (UID: \"8481f32c-d659-4dbb-9ddf-962d17346afc\") " pod="metallb-system/metallb-operator-webhook-server-6d644c45b7-tjdph" Jan 29 15:28:31 crc kubenswrapper[4757]: I0129 15:28:31.225881 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8481f32c-d659-4dbb-9ddf-962d17346afc-apiservice-cert\") pod \"metallb-operator-webhook-server-6d644c45b7-tjdph\" (UID: \"8481f32c-d659-4dbb-9ddf-962d17346afc\") " pod="metallb-system/metallb-operator-webhook-server-6d644c45b7-tjdph" Jan 29 15:28:31 crc kubenswrapper[4757]: I0129 15:28:31.225909 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8481f32c-d659-4dbb-9ddf-962d17346afc-webhook-cert\") pod \"metallb-operator-webhook-server-6d644c45b7-tjdph\" (UID: \"8481f32c-d659-4dbb-9ddf-962d17346afc\") " pod="metallb-system/metallb-operator-webhook-server-6d644c45b7-tjdph" Jan 29 15:28:31 crc kubenswrapper[4757]: I0129 
15:28:31.230917 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8481f32c-d659-4dbb-9ddf-962d17346afc-webhook-cert\") pod \"metallb-operator-webhook-server-6d644c45b7-tjdph\" (UID: \"8481f32c-d659-4dbb-9ddf-962d17346afc\") " pod="metallb-system/metallb-operator-webhook-server-6d644c45b7-tjdph" Jan 29 15:28:31 crc kubenswrapper[4757]: I0129 15:28:31.232900 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8481f32c-d659-4dbb-9ddf-962d17346afc-apiservice-cert\") pod \"metallb-operator-webhook-server-6d644c45b7-tjdph\" (UID: \"8481f32c-d659-4dbb-9ddf-962d17346afc\") " pod="metallb-system/metallb-operator-webhook-server-6d644c45b7-tjdph" Jan 29 15:28:31 crc kubenswrapper[4757]: I0129 15:28:31.249625 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbngv\" (UniqueName: \"kubernetes.io/projected/8481f32c-d659-4dbb-9ddf-962d17346afc-kube-api-access-bbngv\") pod \"metallb-operator-webhook-server-6d644c45b7-tjdph\" (UID: \"8481f32c-d659-4dbb-9ddf-962d17346afc\") " pod="metallb-system/metallb-operator-webhook-server-6d644c45b7-tjdph" Jan 29 15:28:31 crc kubenswrapper[4757]: I0129 15:28:31.460411 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5c94d76d46-599j4"] Jan 29 15:28:31 crc kubenswrapper[4757]: W0129 15:28:31.472169 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1bec22e_bc28_4615_b6f8_e639da353268.slice/crio-2c0f9ea34c31b8940dd4e2ca90875a79de59ddfc2284a90024a909884777e511 WatchSource:0}: Error finding container 2c0f9ea34c31b8940dd4e2ca90875a79de59ddfc2284a90024a909884777e511: Status 404 returned error can't find the container with id 2c0f9ea34c31b8940dd4e2ca90875a79de59ddfc2284a90024a909884777e511 Jan 29 15:28:31 crc kubenswrapper[4757]: I0129 15:28:31.475543 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6d644c45b7-tjdph" Jan 29 15:28:31 crc kubenswrapper[4757]: I0129 15:28:31.819509 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5c94d76d46-599j4" event={"ID":"b1bec22e-bc28-4615-b6f8-e639da353268","Type":"ContainerStarted","Data":"2c0f9ea34c31b8940dd4e2ca90875a79de59ddfc2284a90024a909884777e511"} Jan 29 15:28:31 crc kubenswrapper[4757]: I0129 15:28:31.869120 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6d644c45b7-tjdph"] Jan 29 15:28:31 crc kubenswrapper[4757]: W0129 15:28:31.875580 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8481f32c_d659_4dbb_9ddf_962d17346afc.slice/crio-caed4cabedb6c5cd017fdf9e032a8231a4c626f59ac13f95bd04c2328eb57e4e WatchSource:0}: Error finding container caed4cabedb6c5cd017fdf9e032a8231a4c626f59ac13f95bd04c2328eb57e4e: Status 404 returned error can't find the container with id caed4cabedb6c5cd017fdf9e032a8231a4c626f59ac13f95bd04c2328eb57e4e Jan 29 15:28:32 crc kubenswrapper[4757]: I0129 15:28:32.826967 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6d644c45b7-tjdph" event={"ID":"8481f32c-d659-4dbb-9ddf-962d17346afc","Type":"ContainerStarted","Data":"caed4cabedb6c5cd017fdf9e032a8231a4c626f59ac13f95bd04c2328eb57e4e"} Jan 29 15:28:35 crc kubenswrapper[4757]: I0129 15:28:35.482111 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2nhvc" Jan 29 15:28:35 crc kubenswrapper[4757]: I0129 15:28:35.482566 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2nhvc" Jan 29 15:28:35 crc kubenswrapper[4757]: I0129 15:28:35.528153 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2nhvc" Jan 29 15:28:35 crc kubenswrapper[4757]: I0129 15:28:35.951937 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2nhvc" Jan 29 15:28:36 crc kubenswrapper[4757]: I0129 15:28:36.858939 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2nhvc"] Jan 29 15:28:37 crc kubenswrapper[4757]: I0129 15:28:37.882332 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2nhvc" podUID="256be88b-5169-453f-9b20-e59f8539f582" containerName="registry-server" containerID="cri-o://7961b0797986bd806218b7cefca1d225421da41d3aa59b1e0aa5b41fbf6539e5" gracePeriod=2 Jan 29 15:28:38 crc kubenswrapper[4757]: I0129 15:28:38.452482 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2nhvc" Jan 29 15:28:38 crc kubenswrapper[4757]: I0129 15:28:38.544705 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flnp5\" (UniqueName: \"kubernetes.io/projected/256be88b-5169-453f-9b20-e59f8539f582-kube-api-access-flnp5\") pod \"256be88b-5169-453f-9b20-e59f8539f582\" (UID: \"256be88b-5169-453f-9b20-e59f8539f582\") " Jan 29 15:28:38 crc kubenswrapper[4757]: I0129 15:28:38.544849 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/256be88b-5169-453f-9b20-e59f8539f582-utilities\") pod \"256be88b-5169-453f-9b20-e59f8539f582\" (UID: \"256be88b-5169-453f-9b20-e59f8539f582\") " Jan 29 15:28:38 crc kubenswrapper[4757]: I0129 15:28:38.544917 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/256be88b-5169-453f-9b20-e59f8539f582-catalog-content\") pod \"256be88b-5169-453f-9b20-e59f8539f582\" (UID: \"256be88b-5169-453f-9b20-e59f8539f582\") " Jan 29 15:28:38 crc kubenswrapper[4757]: I0129 15:28:38.545788 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/256be88b-5169-453f-9b20-e59f8539f582-utilities" (OuterVolumeSpecName: "utilities") pod "256be88b-5169-453f-9b20-e59f8539f582" (UID: "256be88b-5169-453f-9b20-e59f8539f582"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4757]: I0129 15:28:38.549725 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/256be88b-5169-453f-9b20-e59f8539f582-kube-api-access-flnp5" (OuterVolumeSpecName: "kube-api-access-flnp5") pod "256be88b-5169-453f-9b20-e59f8539f582" (UID: "256be88b-5169-453f-9b20-e59f8539f582"). InnerVolumeSpecName "kube-api-access-flnp5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4757]: I0129 15:28:38.646198 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/256be88b-5169-453f-9b20-e59f8539f582-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4757]: I0129 15:28:38.646252 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flnp5\" (UniqueName: \"kubernetes.io/projected/256be88b-5169-453f-9b20-e59f8539f582-kube-api-access-flnp5\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4757]: I0129 15:28:38.894916 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6d644c45b7-tjdph" event={"ID":"8481f32c-d659-4dbb-9ddf-962d17346afc","Type":"ContainerStarted","Data":"1defbf7436f0a1acdae99057ed24a503104c9ff731befdc14e75bcdbe2a9c384"} Jan 29 15:28:38 crc kubenswrapper[4757]: I0129 15:28:38.895079 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6d644c45b7-tjdph" Jan 29 15:28:38 crc kubenswrapper[4757]: I0129 15:28:38.930151 4757 generic.go:334] "Generic (PLEG): container finished" podID="256be88b-5169-453f-9b20-e59f8539f582" containerID="7961b0797986bd806218b7cefca1d225421da41d3aa59b1e0aa5b41fbf6539e5" exitCode=0 Jan 29 15:28:38 crc kubenswrapper[4757]: I0129 15:28:38.930230 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2nhvc" event={"ID":"256be88b-5169-453f-9b20-e59f8539f582","Type":"ContainerDied","Data":"7961b0797986bd806218b7cefca1d225421da41d3aa59b1e0aa5b41fbf6539e5"} Jan 29 15:28:38 crc kubenswrapper[4757]: I0129 15:28:38.930486 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2nhvc" Jan 29 15:28:38 crc kubenswrapper[4757]: I0129 15:28:38.930793 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2nhvc" event={"ID":"256be88b-5169-453f-9b20-e59f8539f582","Type":"ContainerDied","Data":"4b520cd39ceaff9dbf91bff410f4ee0f7e0bf655c832298e05e213dff884ee19"} Jan 29 15:28:38 crc kubenswrapper[4757]: I0129 15:28:38.930854 4757 scope.go:117] "RemoveContainer" containerID="7961b0797986bd806218b7cefca1d225421da41d3aa59b1e0aa5b41fbf6539e5" Jan 29 15:28:38 crc kubenswrapper[4757]: I0129 15:28:38.931714 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5c94d76d46-599j4" event={"ID":"b1bec22e-bc28-4615-b6f8-e639da353268","Type":"ContainerStarted","Data":"9b9b408838cf6d662cc40d3d08bdb34b9341ff6f9269f307822235e9b53a212d"} Jan 29 15:28:38 crc kubenswrapper[4757]: I0129 15:28:38.932161 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5c94d76d46-599j4" Jan 29 15:28:38 crc kubenswrapper[4757]: I0129 15:28:38.946693 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6d644c45b7-tjdph" podStartSLOduration=1.676285894 podStartE2EDuration="7.946678807s" podCreationTimestamp="2026-01-29 15:28:31 +0000 UTC" firstStartedPulling="2026-01-29 15:28:31.878110375 +0000 UTC m=+1075.167360612" lastFinishedPulling="2026-01-29 15:28:38.148503288 +0000 UTC m=+1081.437753525" observedRunningTime="2026-01-29 15:28:38.941496457 +0000 UTC m=+1082.230746704" watchObservedRunningTime="2026-01-29 15:28:38.946678807 +0000 UTC m=+1082.235929044" Jan 29 15:28:38 crc kubenswrapper[4757]: I0129 15:28:38.960174 4757 scope.go:117] "RemoveContainer" containerID="42f98a19cb331d8af8626c86fee6e33a32873303f27d34f1613f7605fc4bccc1" Jan 29 15:28:38 crc kubenswrapper[4757]: I0129 15:28:38.975234 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5c94d76d46-599j4" podStartSLOduration=2.453467113 podStartE2EDuration="8.975208474s" podCreationTimestamp="2026-01-29 15:28:30 +0000 UTC" firstStartedPulling="2026-01-29 15:28:31.475981647 +0000 UTC m=+1074.765231884" lastFinishedPulling="2026-01-29 15:28:37.997723008 +0000 UTC m=+1081.286973245" observedRunningTime="2026-01-29 15:28:38.972382922 +0000 UTC m=+1082.261633169" watchObservedRunningTime="2026-01-29 15:28:38.975208474 +0000 UTC m=+1082.264458731" Jan 29 15:28:38 crc kubenswrapper[4757]: I0129 15:28:38.990190 4757 scope.go:117] "RemoveContainer" containerID="de22d15b276a1b6b2294c765adda0bb306bf3ebd8944f6f6507be5e9367c7a08" Jan 29 15:28:39 crc kubenswrapper[4757]: I0129 15:28:39.027568 4757 scope.go:117] "RemoveContainer" containerID="7961b0797986bd806218b7cefca1d225421da41d3aa59b1e0aa5b41fbf6539e5" Jan 29 15:28:39 crc kubenswrapper[4757]: E0129 15:28:39.054474 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7961b0797986bd806218b7cefca1d225421da41d3aa59b1e0aa5b41fbf6539e5\": container with ID starting with 7961b0797986bd806218b7cefca1d225421da41d3aa59b1e0aa5b41fbf6539e5 not found: ID does not exist" containerID="7961b0797986bd806218b7cefca1d225421da41d3aa59b1e0aa5b41fbf6539e5" Jan 29 15:28:39 crc kubenswrapper[4757]: I0129 15:28:39.054531 4757 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"7961b0797986bd806218b7cefca1d225421da41d3aa59b1e0aa5b41fbf6539e5"} err="failed to get container status \"7961b0797986bd806218b7cefca1d225421da41d3aa59b1e0aa5b41fbf6539e5\": rpc error: code = NotFound desc = could not find container \"7961b0797986bd806218b7cefca1d225421da41d3aa59b1e0aa5b41fbf6539e5\": container with ID starting with 7961b0797986bd806218b7cefca1d225421da41d3aa59b1e0aa5b41fbf6539e5 not found: ID does not exist" Jan 29 15:28:39 crc kubenswrapper[4757]: I0129 15:28:39.054561 4757 scope.go:117] "RemoveContainer" containerID="42f98a19cb331d8af8626c86fee6e33a32873303f27d34f1613f7605fc4bccc1" Jan 29 15:28:39 crc kubenswrapper[4757]: E0129 15:28:39.055029 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42f98a19cb331d8af8626c86fee6e33a32873303f27d34f1613f7605fc4bccc1\": container with ID starting with 42f98a19cb331d8af8626c86fee6e33a32873303f27d34f1613f7605fc4bccc1 not found: ID does not exist" containerID="42f98a19cb331d8af8626c86fee6e33a32873303f27d34f1613f7605fc4bccc1" Jan 29 15:28:39 crc kubenswrapper[4757]: I0129 15:28:39.055141 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42f98a19cb331d8af8626c86fee6e33a32873303f27d34f1613f7605fc4bccc1"} err="failed to get container status \"42f98a19cb331d8af8626c86fee6e33a32873303f27d34f1613f7605fc4bccc1\": rpc error: code = NotFound desc = could not find container \"42f98a19cb331d8af8626c86fee6e33a32873303f27d34f1613f7605fc4bccc1\": container with ID starting with 42f98a19cb331d8af8626c86fee6e33a32873303f27d34f1613f7605fc4bccc1 not found: ID does not exist" Jan 29 15:28:39 crc kubenswrapper[4757]: I0129 15:28:39.055235 4757 scope.go:117] "RemoveContainer" containerID="de22d15b276a1b6b2294c765adda0bb306bf3ebd8944f6f6507be5e9367c7a08" Jan 29 15:28:39 crc kubenswrapper[4757]: E0129 15:28:39.055775 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de22d15b276a1b6b2294c765adda0bb306bf3ebd8944f6f6507be5e9367c7a08\": container with ID starting with de22d15b276a1b6b2294c765adda0bb306bf3ebd8944f6f6507be5e9367c7a08 not found: ID does not exist" containerID="de22d15b276a1b6b2294c765adda0bb306bf3ebd8944f6f6507be5e9367c7a08" Jan 29 15:28:39 crc kubenswrapper[4757]: I0129 15:28:39.055818 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de22d15b276a1b6b2294c765adda0bb306bf3ebd8944f6f6507be5e9367c7a08"} err="failed to get container status \"de22d15b276a1b6b2294c765adda0bb306bf3ebd8944f6f6507be5e9367c7a08\": rpc error: code = NotFound desc = could not find container \"de22d15b276a1b6b2294c765adda0bb306bf3ebd8944f6f6507be5e9367c7a08\": container with ID starting with de22d15b276a1b6b2294c765adda0bb306bf3ebd8944f6f6507be5e9367c7a08 not found: ID does not exist" Jan 29 15:28:39 crc kubenswrapper[4757]: I0129 15:28:39.412143 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/256be88b-5169-453f-9b20-e59f8539f582-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "256be88b-5169-453f-9b20-e59f8539f582" (UID: "256be88b-5169-453f-9b20-e59f8539f582"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:28:39 crc kubenswrapper[4757]: I0129 15:28:39.459860 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/256be88b-5169-453f-9b20-e59f8539f582-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:39 crc kubenswrapper[4757]: I0129 15:28:39.557929 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2nhvc"] Jan 29 15:28:39 crc kubenswrapper[4757]: I0129 15:28:39.564697 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2nhvc"] Jan 29 15:28:41 crc kubenswrapper[4757]: I0129 15:28:41.412697 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="256be88b-5169-453f-9b20-e59f8539f582" path="/var/lib/kubelet/pods/256be88b-5169-453f-9b20-e59f8539f582/volumes" Jan 29 15:28:47 crc kubenswrapper[4757]: I0129 15:28:47.663689 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m4csf"] Jan 29 15:28:47 crc kubenswrapper[4757]: E0129 15:28:47.664436 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="256be88b-5169-453f-9b20-e59f8539f582" containerName="registry-server" Jan 29 15:28:47 crc kubenswrapper[4757]: I0129 15:28:47.664451 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="256be88b-5169-453f-9b20-e59f8539f582" containerName="registry-server" Jan 29 15:28:47 crc kubenswrapper[4757]: E0129 15:28:47.664463 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="256be88b-5169-453f-9b20-e59f8539f582" containerName="extract-content" Jan 29 15:28:47 crc kubenswrapper[4757]: I0129 15:28:47.664469 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="256be88b-5169-453f-9b20-e59f8539f582" containerName="extract-content" Jan 29 15:28:47 crc kubenswrapper[4757]: E0129 15:28:47.664484 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="256be88b-5169-453f-9b20-e59f8539f582" containerName="extract-utilities" Jan 29 15:28:47 crc kubenswrapper[4757]: I0129 15:28:47.664491 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="256be88b-5169-453f-9b20-e59f8539f582" containerName="extract-utilities" Jan 29 15:28:47 crc kubenswrapper[4757]: I0129 15:28:47.665189 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="256be88b-5169-453f-9b20-e59f8539f582" containerName="registry-server" Jan 29 15:28:47 crc kubenswrapper[4757]: I0129 15:28:47.665913 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4csf" Jan 29 15:28:47 crc kubenswrapper[4757]: I0129 15:28:47.747582 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4csf"] Jan 29 15:28:47 crc kubenswrapper[4757]: I0129 15:28:47.761485 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53afa6f6-5fe9-42e9-84e4-37ab48afff5e-utilities\") pod \"redhat-marketplace-m4csf\" (UID: \"53afa6f6-5fe9-42e9-84e4-37ab48afff5e\") " pod="openshift-marketplace/redhat-marketplace-m4csf" Jan 29 15:28:47 crc kubenswrapper[4757]: I0129 15:28:47.761832 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53afa6f6-5fe9-42e9-84e4-37ab48afff5e-catalog-content\") pod \"redhat-marketplace-m4csf\" (UID: \"53afa6f6-5fe9-42e9-84e4-37ab48afff5e\") " pod="openshift-marketplace/redhat-marketplace-m4csf" Jan 29 15:28:47 crc kubenswrapper[4757]: I0129 15:28:47.761912 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk6g2\" (UniqueName: \"kubernetes.io/projected/53afa6f6-5fe9-42e9-84e4-37ab48afff5e-kube-api-access-lk6g2\") pod \"redhat-marketplace-m4csf\" (UID: \"53afa6f6-5fe9-42e9-84e4-37ab48afff5e\") " pod="openshift-marketplace/redhat-marketplace-m4csf" Jan 29 15:28:47 crc kubenswrapper[4757]: I0129 15:28:47.862821 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk6g2\" (UniqueName: \"kubernetes.io/projected/53afa6f6-5fe9-42e9-84e4-37ab48afff5e-kube-api-access-lk6g2\") pod \"redhat-marketplace-m4csf\" (UID: \"53afa6f6-5fe9-42e9-84e4-37ab48afff5e\") " pod="openshift-marketplace/redhat-marketplace-m4csf" Jan 29 15:28:47 crc kubenswrapper[4757]: I0129 15:28:47.863147 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53afa6f6-5fe9-42e9-84e4-37ab48afff5e-catalog-content\") pod \"redhat-marketplace-m4csf\" (UID: \"53afa6f6-5fe9-42e9-84e4-37ab48afff5e\") " pod="openshift-marketplace/redhat-marketplace-m4csf" Jan 29 15:28:47 crc kubenswrapper[4757]: I0129 15:28:47.863304 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53afa6f6-5fe9-42e9-84e4-37ab48afff5e-utilities\") pod \"redhat-marketplace-m4csf\" (UID: \"53afa6f6-5fe9-42e9-84e4-37ab48afff5e\") " pod="openshift-marketplace/redhat-marketplace-m4csf" Jan 29 15:28:47 crc kubenswrapper[4757]: I0129 15:28:47.863933 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53afa6f6-5fe9-42e9-84e4-37ab48afff5e-catalog-content\") pod \"redhat-marketplace-m4csf\" (UID: \"53afa6f6-5fe9-42e9-84e4-37ab48afff5e\") " pod="openshift-marketplace/redhat-marketplace-m4csf" Jan 29 15:28:47 crc kubenswrapper[4757]: I0129 15:28:47.864067 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53afa6f6-5fe9-42e9-84e4-37ab48afff5e-utilities\") pod \"redhat-marketplace-m4csf\" (UID: \"53afa6f6-5fe9-42e9-84e4-37ab48afff5e\") " pod="openshift-marketplace/redhat-marketplace-m4csf" Jan 29 15:28:47 crc kubenswrapper[4757]: I0129 15:28:47.900313 4757 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-lk6g2\" (UniqueName: \"kubernetes.io/projected/53afa6f6-5fe9-42e9-84e4-37ab48afff5e-kube-api-access-lk6g2\") pod \"redhat-marketplace-m4csf\" (UID: \"53afa6f6-5fe9-42e9-84e4-37ab48afff5e\") " pod="openshift-marketplace/redhat-marketplace-m4csf" Jan 29 15:28:47 crc kubenswrapper[4757]: I0129 15:28:47.985537 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4csf" Jan 29 15:28:48 crc kubenswrapper[4757]: I0129 15:28:48.414758 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4csf"] Jan 29 15:28:48 crc kubenswrapper[4757]: I0129 15:28:48.990433 4757 generic.go:334] "Generic (PLEG): container finished" podID="53afa6f6-5fe9-42e9-84e4-37ab48afff5e" containerID="fc1610d8e2fb4df9bdf7fbc36fc2669bc3a592edefdde6784d93522931e1388e" exitCode=0 Jan 29 15:28:48 crc kubenswrapper[4757]: I0129 15:28:48.990504 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4csf" event={"ID":"53afa6f6-5fe9-42e9-84e4-37ab48afff5e","Type":"ContainerDied","Data":"fc1610d8e2fb4df9bdf7fbc36fc2669bc3a592edefdde6784d93522931e1388e"} Jan 29 15:28:48 crc kubenswrapper[4757]: I0129 15:28:48.990734 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4csf" event={"ID":"53afa6f6-5fe9-42e9-84e4-37ab48afff5e","Type":"ContainerStarted","Data":"a3e82502885d0728cf8d059c28861af8358ad4e44c797951a0c3b2f3ccd971a5"} Jan 29 15:28:51 crc kubenswrapper[4757]: I0129 15:28:51.012193 4757 generic.go:334] "Generic (PLEG): container finished" podID="53afa6f6-5fe9-42e9-84e4-37ab48afff5e" containerID="2862de50c8b6d106199cbf78916e0965913f0b630a851d4066da169295422ba0" exitCode=0 Jan 29 15:28:51 crc kubenswrapper[4757]: I0129 15:28:51.012330 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4csf" event={"ID":"53afa6f6-5fe9-42e9-84e4-37ab48afff5e","Type":"ContainerDied","Data":"2862de50c8b6d106199cbf78916e0965913f0b630a851d4066da169295422ba0"} Jan 29 15:28:51 crc kubenswrapper[4757]: I0129 15:28:51.484438 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6d644c45b7-tjdph" Jan 29 15:28:52 crc kubenswrapper[4757]: I0129 15:28:52.020118 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4csf" event={"ID":"53afa6f6-5fe9-42e9-84e4-37ab48afff5e","Type":"ContainerStarted","Data":"c086df26803708483ac9ad4f872fc734281015f7a35a49ab0bc56fa46d5d661d"} Jan 29 15:28:52 crc kubenswrapper[4757]: I0129 15:28:52.045856 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m4csf" podStartSLOduration=2.577882787 podStartE2EDuration="5.045836072s" podCreationTimestamp="2026-01-29 15:28:47 +0000 UTC" firstStartedPulling="2026-01-29 15:28:48.992254691 +0000 UTC m=+1092.281504938" lastFinishedPulling="2026-01-29 15:28:51.460207986 +0000 UTC m=+1094.749458223" observedRunningTime="2026-01-29 15:28:52.043850884 +0000 UTC m=+1095.333101121" watchObservedRunningTime="2026-01-29 15:28:52.045836072 +0000 UTC m=+1095.335086309" Jan 29 15:28:57 crc kubenswrapper[4757]: I0129 15:28:57.985821 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m4csf" Jan 29 15:28:57 crc kubenswrapper[4757]: I0129 15:28:57.986154 4757 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m4csf" Jan 29 15:28:58 crc kubenswrapper[4757]: I0129 15:28:58.022471 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m4csf" Jan 29 15:28:58 crc kubenswrapper[4757]: I0129 15:28:58.089070 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m4csf" Jan 29 15:29:00 crc kubenswrapper[4757]: I0129 15:29:00.253737 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4csf"] Jan 29 15:29:00 crc kubenswrapper[4757]: I0129 15:29:00.254068 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m4csf" podUID="53afa6f6-5fe9-42e9-84e4-37ab48afff5e" containerName="registry-server" containerID="cri-o://c086df26803708483ac9ad4f872fc734281015f7a35a49ab0bc56fa46d5d661d" gracePeriod=2 Jan 29 15:29:00 crc kubenswrapper[4757]: I0129 15:29:00.681148 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4csf" Jan 29 15:29:00 crc kubenswrapper[4757]: I0129 15:29:00.785072 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53afa6f6-5fe9-42e9-84e4-37ab48afff5e-catalog-content\") pod \"53afa6f6-5fe9-42e9-84e4-37ab48afff5e\" (UID: \"53afa6f6-5fe9-42e9-84e4-37ab48afff5e\") " Jan 29 15:29:00 crc kubenswrapper[4757]: I0129 15:29:00.785175 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53afa6f6-5fe9-42e9-84e4-37ab48afff5e-utilities\") pod \"53afa6f6-5fe9-42e9-84e4-37ab48afff5e\" (UID: \"53afa6f6-5fe9-42e9-84e4-37ab48afff5e\") " Jan 29 15:29:00 crc kubenswrapper[4757]: I0129 15:29:00.785330 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk6g2\" (UniqueName: \"kubernetes.io/projected/53afa6f6-5fe9-42e9-84e4-37ab48afff5e-kube-api-access-lk6g2\") pod \"53afa6f6-5fe9-42e9-84e4-37ab48afff5e\" (UID: \"53afa6f6-5fe9-42e9-84e4-37ab48afff5e\") " Jan 29 15:29:00 crc kubenswrapper[4757]: I0129 15:29:00.787068 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53afa6f6-5fe9-42e9-84e4-37ab48afff5e-utilities" (OuterVolumeSpecName: "utilities") pod "53afa6f6-5fe9-42e9-84e4-37ab48afff5e" (UID: "53afa6f6-5fe9-42e9-84e4-37ab48afff5e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:29:00 crc kubenswrapper[4757]: I0129 15:29:00.792418 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53afa6f6-5fe9-42e9-84e4-37ab48afff5e-kube-api-access-lk6g2" (OuterVolumeSpecName: "kube-api-access-lk6g2") pod "53afa6f6-5fe9-42e9-84e4-37ab48afff5e" (UID: "53afa6f6-5fe9-42e9-84e4-37ab48afff5e"). InnerVolumeSpecName "kube-api-access-lk6g2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:29:00 crc kubenswrapper[4757]: I0129 15:29:00.808651 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53afa6f6-5fe9-42e9-84e4-37ab48afff5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53afa6f6-5fe9-42e9-84e4-37ab48afff5e" (UID: "53afa6f6-5fe9-42e9-84e4-37ab48afff5e"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:29:00 crc kubenswrapper[4757]: I0129 15:29:00.886753 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53afa6f6-5fe9-42e9-84e4-37ab48afff5e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:29:00 crc kubenswrapper[4757]: I0129 15:29:00.887128 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53afa6f6-5fe9-42e9-84e4-37ab48afff5e-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:29:00 crc kubenswrapper[4757]: I0129 15:29:00.887191 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk6g2\" (UniqueName: \"kubernetes.io/projected/53afa6f6-5fe9-42e9-84e4-37ab48afff5e-kube-api-access-lk6g2\") on node \"crc\" DevicePath \"\"" Jan 29 15:29:01 crc kubenswrapper[4757]: I0129 15:29:01.073666 4757 generic.go:334] "Generic (PLEG): container finished" podID="53afa6f6-5fe9-42e9-84e4-37ab48afff5e" containerID="c086df26803708483ac9ad4f872fc734281015f7a35a49ab0bc56fa46d5d661d" exitCode=0 Jan 29 15:29:01 crc kubenswrapper[4757]: I0129 15:29:01.073715 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4csf" event={"ID":"53afa6f6-5fe9-42e9-84e4-37ab48afff5e","Type":"ContainerDied","Data":"c086df26803708483ac9ad4f872fc734281015f7a35a49ab0bc56fa46d5d661d"} Jan 29 15:29:01 crc kubenswrapper[4757]: I0129 15:29:01.073747 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4csf" event={"ID":"53afa6f6-5fe9-42e9-84e4-37ab48afff5e","Type":"ContainerDied","Data":"a3e82502885d0728cf8d059c28861af8358ad4e44c797951a0c3b2f3ccd971a5"} Jan 29 15:29:01 crc kubenswrapper[4757]: I0129 15:29:01.073767 4757 scope.go:117] "RemoveContainer" containerID="c086df26803708483ac9ad4f872fc734281015f7a35a49ab0bc56fa46d5d661d" Jan 29 15:29:01 crc kubenswrapper[4757]: I0129 15:29:01.073936 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4csf" Jan 29 15:29:01 crc kubenswrapper[4757]: I0129 15:29:01.091146 4757 scope.go:117] "RemoveContainer" containerID="2862de50c8b6d106199cbf78916e0965913f0b630a851d4066da169295422ba0" Jan 29 15:29:01 crc kubenswrapper[4757]: I0129 15:29:01.103763 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4csf"] Jan 29 15:29:01 crc kubenswrapper[4757]: I0129 15:29:01.108228 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4csf"] Jan 29 15:29:01 crc kubenswrapper[4757]: I0129 15:29:01.118856 4757 scope.go:117] "RemoveContainer" containerID="fc1610d8e2fb4df9bdf7fbc36fc2669bc3a592edefdde6784d93522931e1388e" Jan 29 15:29:01 crc kubenswrapper[4757]: I0129 15:29:01.137439 4757 scope.go:117] "RemoveContainer" containerID="c086df26803708483ac9ad4f872fc734281015f7a35a49ab0bc56fa46d5d661d" Jan 29 15:29:01 crc kubenswrapper[4757]: E0129 15:29:01.137878 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c086df26803708483ac9ad4f872fc734281015f7a35a49ab0bc56fa46d5d661d\": container with ID starting with c086df26803708483ac9ad4f872fc734281015f7a35a49ab0bc56fa46d5d661d not found: ID does not exist" containerID="c086df26803708483ac9ad4f872fc734281015f7a35a49ab0bc56fa46d5d661d" Jan 29 15:29:01 crc kubenswrapper[4757]: I0129 15:29:01.137917 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c086df26803708483ac9ad4f872fc734281015f7a35a49ab0bc56fa46d5d661d"} err="failed to get container status \"c086df26803708483ac9ad4f872fc734281015f7a35a49ab0bc56fa46d5d661d\": rpc error: code = NotFound desc = could not find container \"c086df26803708483ac9ad4f872fc734281015f7a35a49ab0bc56fa46d5d661d\": container with ID starting with c086df26803708483ac9ad4f872fc734281015f7a35a49ab0bc56fa46d5d661d not found: ID does not exist" Jan 29 15:29:01 crc kubenswrapper[4757]: I0129 15:29:01.137942 4757 scope.go:117] "RemoveContainer" containerID="2862de50c8b6d106199cbf78916e0965913f0b630a851d4066da169295422ba0" Jan 29 15:29:01 crc kubenswrapper[4757]: E0129 15:29:01.138195 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2862de50c8b6d106199cbf78916e0965913f0b630a851d4066da169295422ba0\": container with ID starting with 2862de50c8b6d106199cbf78916e0965913f0b630a851d4066da169295422ba0 not found: ID does not exist" containerID="2862de50c8b6d106199cbf78916e0965913f0b630a851d4066da169295422ba0" Jan 29 15:29:01 crc kubenswrapper[4757]: I0129 15:29:01.138219 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2862de50c8b6d106199cbf78916e0965913f0b630a851d4066da169295422ba0"} err="failed to get container status \"2862de50c8b6d106199cbf78916e0965913f0b630a851d4066da169295422ba0\": rpc error: code = NotFound desc = could not find container \"2862de50c8b6d106199cbf78916e0965913f0b630a851d4066da169295422ba0\": container with ID starting with 2862de50c8b6d106199cbf78916e0965913f0b630a851d4066da169295422ba0 not found: ID does not exist" Jan 29 15:29:01 crc kubenswrapper[4757]: I0129 15:29:01.138236 4757 scope.go:117] "RemoveContainer" containerID="fc1610d8e2fb4df9bdf7fbc36fc2669bc3a592edefdde6784d93522931e1388e" Jan 29 15:29:01 crc kubenswrapper[4757]: E0129 15:29:01.138637 4757 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fc1610d8e2fb4df9bdf7fbc36fc2669bc3a592edefdde6784d93522931e1388e\": container with ID starting with fc1610d8e2fb4df9bdf7fbc36fc2669bc3a592edefdde6784d93522931e1388e not found: ID does not exist" containerID="fc1610d8e2fb4df9bdf7fbc36fc2669bc3a592edefdde6784d93522931e1388e" Jan 29 15:29:01 crc kubenswrapper[4757]: I0129 15:29:01.138657 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc1610d8e2fb4df9bdf7fbc36fc2669bc3a592edefdde6784d93522931e1388e"} err="failed to get container status \"fc1610d8e2fb4df9bdf7fbc36fc2669bc3a592edefdde6784d93522931e1388e\": rpc error: code = NotFound desc = could not find container \"fc1610d8e2fb4df9bdf7fbc36fc2669bc3a592edefdde6784d93522931e1388e\": container with ID starting with fc1610d8e2fb4df9bdf7fbc36fc2669bc3a592edefdde6784d93522931e1388e not found: ID does not exist" Jan 29 15:29:01 crc kubenswrapper[4757]: I0129 15:29:01.403015 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53afa6f6-5fe9-42e9-84e4-37ab48afff5e" path="/var/lib/kubelet/pods/53afa6f6-5fe9-42e9-84e4-37ab48afff5e/volumes" Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.107431 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5c94d76d46-599j4" Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.749946 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-9dgf4"] Jan 29 15:29:11 crc kubenswrapper[4757]: E0129 15:29:11.750525 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53afa6f6-5fe9-42e9-84e4-37ab48afff5e" containerName="registry-server" Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.750549 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="53afa6f6-5fe9-42e9-84e4-37ab48afff5e" containerName="registry-server" Jan 29 15:29:11 crc kubenswrapper[4757]: E0129 15:29:11.750561 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53afa6f6-5fe9-42e9-84e4-37ab48afff5e" containerName="extract-content" Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.750569 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="53afa6f6-5fe9-42e9-84e4-37ab48afff5e" containerName="extract-content" Jan 29 15:29:11 crc kubenswrapper[4757]: E0129 15:29:11.750578 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53afa6f6-5fe9-42e9-84e4-37ab48afff5e" containerName="extract-utilities" Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.750586 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="53afa6f6-5fe9-42e9-84e4-37ab48afff5e" containerName="extract-utilities" Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.750716 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="53afa6f6-5fe9-42e9-84e4-37ab48afff5e" containerName="registry-server" Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.752790 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-9dgf4" Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.755634 4757 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-n2hrk" Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.757256 4757 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.768763 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-w97wd"] Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.769648 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w97wd" Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.774288 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.775232 4757 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.806989 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-w97wd"] Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.868640 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-6xltj"] Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.869815 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-6xltj" Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.874759 4757 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.876352 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.876379 4757 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.880094 4757 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-wdbwp" Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.893588 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-sll65"] Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.894468 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-sll65" Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.897176 4757 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.914337 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-sll65"] Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.926799 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjb9l\" (UniqueName: \"kubernetes.io/projected/200c0920-028c-4895-a093-edf9ee940c1f-kube-api-access-fjb9l\") pod \"frr-k8s-webhook-server-7df86c4f6c-w97wd\" (UID: \"200c0920-028c-4895-a093-edf9ee940c1f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w97wd" Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.926843 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6bf723f3-fad1-4294-824a-97b5c64953d5-frr-startup\") pod \"frr-k8s-9dgf4\" (UID: \"6bf723f3-fad1-4294-824a-97b5c64953d5\") " pod="metallb-system/frr-k8s-9dgf4" Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.926898 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwk6q\" (UniqueName: \"kubernetes.io/projected/6bf723f3-fad1-4294-824a-97b5c64953d5-kube-api-access-kwk6q\") pod \"frr-k8s-9dgf4\" (UID: \"6bf723f3-fad1-4294-824a-97b5c64953d5\") " pod="metallb-system/frr-k8s-9dgf4" Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.926920 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6bf723f3-fad1-4294-824a-97b5c64953d5-reloader\") pod \"frr-k8s-9dgf4\" (UID: \"6bf723f3-fad1-4294-824a-97b5c64953d5\") " pod="metallb-system/frr-k8s-9dgf4" Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.926934 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/200c0920-028c-4895-a093-edf9ee940c1f-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-w97wd\" (UID: \"200c0920-028c-4895-a093-edf9ee940c1f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w97wd" Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.926951 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6bf723f3-fad1-4294-824a-97b5c64953d5-frr-sockets\") pod \"frr-k8s-9dgf4\" (UID: \"6bf723f3-fad1-4294-824a-97b5c64953d5\") " pod="metallb-system/frr-k8s-9dgf4" Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.926974 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6bf723f3-fad1-4294-824a-97b5c64953d5-metrics\") pod \"frr-k8s-9dgf4\" (UID: \"6bf723f3-fad1-4294-824a-97b5c64953d5\") " pod="metallb-system/frr-k8s-9dgf4" Jan 29 15:29:11 crc kubenswrapper[4757]: I0129 15:29:11.927002 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6bf723f3-fad1-4294-824a-97b5c64953d5-frr-conf\") pod \"frr-k8s-9dgf4\" (UID: \"6bf723f3-fad1-4294-824a-97b5c64953d5\") " pod="metallb-system/frr-k8s-9dgf4" Jan 29 15:29:11 
crc kubenswrapper[4757]: I0129 15:29:11.927018 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bf723f3-fad1-4294-824a-97b5c64953d5-metrics-certs\") pod \"frr-k8s-9dgf4\" (UID: \"6bf723f3-fad1-4294-824a-97b5c64953d5\") " pod="metallb-system/frr-k8s-9dgf4" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.028713 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwk6q\" (UniqueName: \"kubernetes.io/projected/6bf723f3-fad1-4294-824a-97b5c64953d5-kube-api-access-kwk6q\") pod \"frr-k8s-9dgf4\" (UID: \"6bf723f3-fad1-4294-824a-97b5c64953d5\") " pod="metallb-system/frr-k8s-9dgf4" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.028822 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/31de118f-e4a8-488b-91a9-470c6cdc900c-cert\") pod \"controller-6968d8fdc4-sll65\" (UID: \"31de118f-e4a8-488b-91a9-470c6cdc900c\") " pod="metallb-system/controller-6968d8fdc4-sll65" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.028928 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6bf723f3-fad1-4294-824a-97b5c64953d5-reloader\") pod \"frr-k8s-9dgf4\" (UID: \"6bf723f3-fad1-4294-824a-97b5c64953d5\") " pod="metallb-system/frr-k8s-9dgf4" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.028954 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/200c0920-028c-4895-a093-edf9ee940c1f-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-w97wd\" (UID: \"200c0920-028c-4895-a093-edf9ee940c1f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w97wd" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.029003 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6bf723f3-fad1-4294-824a-97b5c64953d5-frr-sockets\") pod \"frr-k8s-9dgf4\" (UID: \"6bf723f3-fad1-4294-824a-97b5c64953d5\") " pod="metallb-system/frr-k8s-9dgf4" Jan 29 15:29:12 crc kubenswrapper[4757]: E0129 15:29:12.029120 4757 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 29 15:29:12 crc kubenswrapper[4757]: E0129 15:29:12.029171 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/200c0920-028c-4895-a093-edf9ee940c1f-cert podName:200c0920-028c-4895-a093-edf9ee940c1f nodeName:}" failed. No retries permitted until 2026-01-29 15:29:12.529150075 +0000 UTC m=+1115.818400322 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/200c0920-028c-4895-a093-edf9ee940c1f-cert") pod "frr-k8s-webhook-server-7df86c4f6c-w97wd" (UID: "200c0920-028c-4895-a093-edf9ee940c1f") : secret "frr-k8s-webhook-server-cert" not found Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.029026 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/31de118f-e4a8-488b-91a9-470c6cdc900c-metrics-certs\") pod \"controller-6968d8fdc4-sll65\" (UID: \"31de118f-e4a8-488b-91a9-470c6cdc900c\") " pod="metallb-system/controller-6968d8fdc4-sll65" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.029249 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6bf723f3-fad1-4294-824a-97b5c64953d5-metrics\") pod \"frr-k8s-9dgf4\" (UID: \"6bf723f3-fad1-4294-824a-97b5c64953d5\") " pod="metallb-system/frr-k8s-9dgf4" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.029308 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e131102b-c200-45ff-a236-9b2cd0435f88-metallb-excludel2\") pod \"speaker-6xltj\" (UID: \"e131102b-c200-45ff-a236-9b2cd0435f88\") " pod="metallb-system/speaker-6xltj" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.029345 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6bf723f3-fad1-4294-824a-97b5c64953d5-reloader\") pod \"frr-k8s-9dgf4\" (UID: \"6bf723f3-fad1-4294-824a-97b5c64953d5\") " pod="metallb-system/frr-k8s-9dgf4" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.029351 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6bf723f3-fad1-4294-824a-97b5c64953d5-frr-conf\") pod \"frr-k8s-9dgf4\" (UID: \"6bf723f3-fad1-4294-824a-97b5c64953d5\") " pod="metallb-system/frr-k8s-9dgf4" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.029415 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnchn\" (UniqueName: \"kubernetes.io/projected/e131102b-c200-45ff-a236-9b2cd0435f88-kube-api-access-bnchn\") pod \"speaker-6xltj\" (UID: \"e131102b-c200-45ff-a236-9b2cd0435f88\") " pod="metallb-system/speaker-6xltj" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.029446 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bf723f3-fad1-4294-824a-97b5c64953d5-metrics-certs\") pod \"frr-k8s-9dgf4\" (UID: \"6bf723f3-fad1-4294-824a-97b5c64953d5\") " pod="metallb-system/frr-k8s-9dgf4" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.029484 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjb9l\" (UniqueName: \"kubernetes.io/projected/200c0920-028c-4895-a093-edf9ee940c1f-kube-api-access-fjb9l\") pod \"frr-k8s-webhook-server-7df86c4f6c-w97wd\" (UID: \"200c0920-028c-4895-a093-edf9ee940c1f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w97wd" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.029506 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6bf723f3-fad1-4294-824a-97b5c64953d5-frr-startup\") pod 
\"frr-k8s-9dgf4\" (UID: \"6bf723f3-fad1-4294-824a-97b5c64953d5\") " pod="metallb-system/frr-k8s-9dgf4" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.029574 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e131102b-c200-45ff-a236-9b2cd0435f88-memberlist\") pod \"speaker-6xltj\" (UID: \"e131102b-c200-45ff-a236-9b2cd0435f88\") " pod="metallb-system/speaker-6xltj" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.029595 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e131102b-c200-45ff-a236-9b2cd0435f88-metrics-certs\") pod \"speaker-6xltj\" (UID: \"e131102b-c200-45ff-a236-9b2cd0435f88\") " pod="metallb-system/speaker-6xltj" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.029491 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6bf723f3-fad1-4294-824a-97b5c64953d5-frr-sockets\") pod \"frr-k8s-9dgf4\" (UID: \"6bf723f3-fad1-4294-824a-97b5c64953d5\") " pod="metallb-system/frr-k8s-9dgf4" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.029627 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6bf723f3-fad1-4294-824a-97b5c64953d5-metrics\") pod \"frr-k8s-9dgf4\" (UID: \"6bf723f3-fad1-4294-824a-97b5c64953d5\") " pod="metallb-system/frr-k8s-9dgf4" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.029680 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlt7l\" (UniqueName: \"kubernetes.io/projected/31de118f-e4a8-488b-91a9-470c6cdc900c-kube-api-access-hlt7l\") pod \"controller-6968d8fdc4-sll65\" (UID: \"31de118f-e4a8-488b-91a9-470c6cdc900c\") " pod="metallb-system/controller-6968d8fdc4-sll65" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.030740 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6bf723f3-fad1-4294-824a-97b5c64953d5-frr-startup\") pod \"frr-k8s-9dgf4\" (UID: \"6bf723f3-fad1-4294-824a-97b5c64953d5\") " pod="metallb-system/frr-k8s-9dgf4" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.031816 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6bf723f3-fad1-4294-824a-97b5c64953d5-frr-conf\") pod \"frr-k8s-9dgf4\" (UID: \"6bf723f3-fad1-4294-824a-97b5c64953d5\") " pod="metallb-system/frr-k8s-9dgf4" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.043794 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bf723f3-fad1-4294-824a-97b5c64953d5-metrics-certs\") pod \"frr-k8s-9dgf4\" (UID: \"6bf723f3-fad1-4294-824a-97b5c64953d5\") " pod="metallb-system/frr-k8s-9dgf4" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.052071 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwk6q\" (UniqueName: \"kubernetes.io/projected/6bf723f3-fad1-4294-824a-97b5c64953d5-kube-api-access-kwk6q\") pod \"frr-k8s-9dgf4\" (UID: \"6bf723f3-fad1-4294-824a-97b5c64953d5\") " pod="metallb-system/frr-k8s-9dgf4" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.066092 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjb9l\" 
(UniqueName: \"kubernetes.io/projected/200c0920-028c-4895-a093-edf9ee940c1f-kube-api-access-fjb9l\") pod \"frr-k8s-webhook-server-7df86c4f6c-w97wd\" (UID: \"200c0920-028c-4895-a093-edf9ee940c1f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w97wd" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.080731 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-9dgf4" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.131180 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e131102b-c200-45ff-a236-9b2cd0435f88-memberlist\") pod \"speaker-6xltj\" (UID: \"e131102b-c200-45ff-a236-9b2cd0435f88\") " pod="metallb-system/speaker-6xltj" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.131229 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e131102b-c200-45ff-a236-9b2cd0435f88-metrics-certs\") pod \"speaker-6xltj\" (UID: \"e131102b-c200-45ff-a236-9b2cd0435f88\") " pod="metallb-system/speaker-6xltj" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.131261 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlt7l\" (UniqueName: \"kubernetes.io/projected/31de118f-e4a8-488b-91a9-470c6cdc900c-kube-api-access-hlt7l\") pod \"controller-6968d8fdc4-sll65\" (UID: \"31de118f-e4a8-488b-91a9-470c6cdc900c\") " pod="metallb-system/controller-6968d8fdc4-sll65" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.131323 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/31de118f-e4a8-488b-91a9-470c6cdc900c-cert\") pod \"controller-6968d8fdc4-sll65\" (UID: \"31de118f-e4a8-488b-91a9-470c6cdc900c\") " pod="metallb-system/controller-6968d8fdc4-sll65" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.131364 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/31de118f-e4a8-488b-91a9-470c6cdc900c-metrics-certs\") pod \"controller-6968d8fdc4-sll65\" (UID: \"31de118f-e4a8-488b-91a9-470c6cdc900c\") " pod="metallb-system/controller-6968d8fdc4-sll65" Jan 29 15:29:12 crc kubenswrapper[4757]: E0129 15:29:12.131368 4757 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.131395 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e131102b-c200-45ff-a236-9b2cd0435f88-metallb-excludel2\") pod \"speaker-6xltj\" (UID: \"e131102b-c200-45ff-a236-9b2cd0435f88\") " pod="metallb-system/speaker-6xltj" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.131422 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnchn\" (UniqueName: \"kubernetes.io/projected/e131102b-c200-45ff-a236-9b2cd0435f88-kube-api-access-bnchn\") pod \"speaker-6xltj\" (UID: \"e131102b-c200-45ff-a236-9b2cd0435f88\") " pod="metallb-system/speaker-6xltj" Jan 29 15:29:12 crc kubenswrapper[4757]: E0129 15:29:12.131448 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e131102b-c200-45ff-a236-9b2cd0435f88-memberlist podName:e131102b-c200-45ff-a236-9b2cd0435f88 nodeName:}" failed. 
No retries permitted until 2026-01-29 15:29:12.63142431 +0000 UTC m=+1115.920674647 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e131102b-c200-45ff-a236-9b2cd0435f88-memberlist") pod "speaker-6xltj" (UID: "e131102b-c200-45ff-a236-9b2cd0435f88") : secret "metallb-memberlist" not found Jan 29 15:29:12 crc kubenswrapper[4757]: E0129 15:29:12.131889 4757 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 29 15:29:12 crc kubenswrapper[4757]: E0129 15:29:12.131939 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31de118f-e4a8-488b-91a9-470c6cdc900c-metrics-certs podName:31de118f-e4a8-488b-91a9-470c6cdc900c nodeName:}" failed. No retries permitted until 2026-01-29 15:29:12.631922364 +0000 UTC m=+1115.921172691 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/31de118f-e4a8-488b-91a9-470c6cdc900c-metrics-certs") pod "controller-6968d8fdc4-sll65" (UID: "31de118f-e4a8-488b-91a9-470c6cdc900c") : secret "controller-certs-secret" not found Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.132664 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e131102b-c200-45ff-a236-9b2cd0435f88-metallb-excludel2\") pod \"speaker-6xltj\" (UID: \"e131102b-c200-45ff-a236-9b2cd0435f88\") " pod="metallb-system/speaker-6xltj" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.138567 4757 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.138818 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e131102b-c200-45ff-a236-9b2cd0435f88-metrics-certs\") pod \"speaker-6xltj\" (UID: \"e131102b-c200-45ff-a236-9b2cd0435f88\") " pod="metallb-system/speaker-6xltj" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.145751 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/31de118f-e4a8-488b-91a9-470c6cdc900c-cert\") pod \"controller-6968d8fdc4-sll65\" (UID: \"31de118f-e4a8-488b-91a9-470c6cdc900c\") " pod="metallb-system/controller-6968d8fdc4-sll65" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.156061 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlt7l\" (UniqueName: \"kubernetes.io/projected/31de118f-e4a8-488b-91a9-470c6cdc900c-kube-api-access-hlt7l\") pod \"controller-6968d8fdc4-sll65\" (UID: \"31de118f-e4a8-488b-91a9-470c6cdc900c\") " pod="metallb-system/controller-6968d8fdc4-sll65" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.162119 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnchn\" (UniqueName: \"kubernetes.io/projected/e131102b-c200-45ff-a236-9b2cd0435f88-kube-api-access-bnchn\") pod \"speaker-6xltj\" (UID: \"e131102b-c200-45ff-a236-9b2cd0435f88\") " pod="metallb-system/speaker-6xltj" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.536543 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/200c0920-028c-4895-a093-edf9ee940c1f-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-w97wd\" (UID: \"200c0920-028c-4895-a093-edf9ee940c1f\") " 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w97wd" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.539571 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/200c0920-028c-4895-a093-edf9ee940c1f-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-w97wd\" (UID: \"200c0920-028c-4895-a093-edf9ee940c1f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w97wd" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.637917 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/31de118f-e4a8-488b-91a9-470c6cdc900c-metrics-certs\") pod \"controller-6968d8fdc4-sll65\" (UID: \"31de118f-e4a8-488b-91a9-470c6cdc900c\") " pod="metallb-system/controller-6968d8fdc4-sll65" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.637999 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e131102b-c200-45ff-a236-9b2cd0435f88-memberlist\") pod \"speaker-6xltj\" (UID: \"e131102b-c200-45ff-a236-9b2cd0435f88\") " pod="metallb-system/speaker-6xltj" Jan 29 15:29:12 crc kubenswrapper[4757]: E0129 15:29:12.638130 4757 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 15:29:12 crc kubenswrapper[4757]: E0129 15:29:12.638183 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e131102b-c200-45ff-a236-9b2cd0435f88-memberlist podName:e131102b-c200-45ff-a236-9b2cd0435f88 nodeName:}" failed. No retries permitted until 2026-01-29 15:29:13.63816807 +0000 UTC m=+1116.927418307 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e131102b-c200-45ff-a236-9b2cd0435f88-memberlist") pod "speaker-6xltj" (UID: "e131102b-c200-45ff-a236-9b2cd0435f88") : secret "metallb-memberlist" not found Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.641875 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/31de118f-e4a8-488b-91a9-470c6cdc900c-metrics-certs\") pod \"controller-6968d8fdc4-sll65\" (UID: \"31de118f-e4a8-488b-91a9-470c6cdc900c\") " pod="metallb-system/controller-6968d8fdc4-sll65" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.688537 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w97wd" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.811102 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-sll65" Jan 29 15:29:12 crc kubenswrapper[4757]: I0129 15:29:12.911738 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-w97wd"] Jan 29 15:29:13 crc kubenswrapper[4757]: I0129 15:29:13.021570 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-sll65"] Jan 29 15:29:13 crc kubenswrapper[4757]: W0129 15:29:13.027027 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31de118f_e4a8_488b_91a9_470c6cdc900c.slice/crio-1f5bb249e8b68b0a72c652660ef85125421fcd9f3f4a9762da5b049d28e7002c WatchSource:0}: Error finding container 1f5bb249e8b68b0a72c652660ef85125421fcd9f3f4a9762da5b049d28e7002c: Status 404 returned error can't find the container with id 1f5bb249e8b68b0a72c652660ef85125421fcd9f3f4a9762da5b049d28e7002c Jan 29 15:29:13 crc kubenswrapper[4757]: I0129 15:29:13.132724 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9dgf4" event={"ID":"6bf723f3-fad1-4294-824a-97b5c64953d5","Type":"ContainerStarted","Data":"5658f6b954168cd5692db21b5426362add3b142e1f8e6c6223156d8c2d888288"} Jan 29 15:29:13 crc kubenswrapper[4757]: I0129 15:29:13.135573 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-sll65" event={"ID":"31de118f-e4a8-488b-91a9-470c6cdc900c","Type":"ContainerStarted","Data":"1f5bb249e8b68b0a72c652660ef85125421fcd9f3f4a9762da5b049d28e7002c"} Jan 29 15:29:13 crc kubenswrapper[4757]: I0129 15:29:13.136855 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w97wd" event={"ID":"200c0920-028c-4895-a093-edf9ee940c1f","Type":"ContainerStarted","Data":"aff19fd9f66f5b796d60a3799c290a8a2f52c724ce9907d8bc2ab83fb565af8c"} Jan 29 15:29:13 crc kubenswrapper[4757]: I0129 15:29:13.652834 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e131102b-c200-45ff-a236-9b2cd0435f88-memberlist\") pod \"speaker-6xltj\" (UID: \"e131102b-c200-45ff-a236-9b2cd0435f88\") " pod="metallb-system/speaker-6xltj" Jan 29 15:29:13 crc kubenswrapper[4757]: I0129 15:29:13.664206 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e131102b-c200-45ff-a236-9b2cd0435f88-memberlist\") pod \"speaker-6xltj\" (UID: \"e131102b-c200-45ff-a236-9b2cd0435f88\") " pod="metallb-system/speaker-6xltj" Jan 29 15:29:13 crc kubenswrapper[4757]: I0129 15:29:13.686420 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-6xltj" Jan 29 15:29:13 crc kubenswrapper[4757]: W0129 15:29:13.703465 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode131102b_c200_45ff_a236_9b2cd0435f88.slice/crio-2e8c0e2160230becf20e42d70a3a3f4c93a4f1c7c66e6d9c7ab9b00477edb506 WatchSource:0}: Error finding container 2e8c0e2160230becf20e42d70a3a3f4c93a4f1c7c66e6d9c7ab9b00477edb506: Status 404 returned error can't find the container with id 2e8c0e2160230becf20e42d70a3a3f4c93a4f1c7c66e6d9c7ab9b00477edb506 Jan 29 15:29:14 crc kubenswrapper[4757]: I0129 15:29:14.146737 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6xltj" event={"ID":"e131102b-c200-45ff-a236-9b2cd0435f88","Type":"ContainerStarted","Data":"ff4c7a84072ccb57b6df460bf9078ef5d5f2eb2eeeb6112b38110df3696824d0"} Jan 29 15:29:14 crc kubenswrapper[4757]: I0129 15:29:14.146783 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6xltj" event={"ID":"e131102b-c200-45ff-a236-9b2cd0435f88","Type":"ContainerStarted","Data":"2e8c0e2160230becf20e42d70a3a3f4c93a4f1c7c66e6d9c7ab9b00477edb506"} Jan 29 15:29:14 crc kubenswrapper[4757]: I0129 15:29:14.149228 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-sll65" event={"ID":"31de118f-e4a8-488b-91a9-470c6cdc900c","Type":"ContainerStarted","Data":"e91af52616d1edd2e4e9675868f147a6ca20e09e5afa1443714534686fe7de99"} Jan 29 15:29:14 crc kubenswrapper[4757]: I0129 15:29:14.149258 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-sll65" event={"ID":"31de118f-e4a8-488b-91a9-470c6cdc900c","Type":"ContainerStarted","Data":"e7b833b6dde183a7502dbbe05fc540e22e658e67dc5c770e20bb031d137a1d9e"} Jan 29 15:29:14 crc kubenswrapper[4757]: I0129 15:29:14.149433 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-sll65" Jan 29 15:29:14 crc kubenswrapper[4757]: I0129 15:29:14.182484 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-sll65" podStartSLOduration=3.182464108 podStartE2EDuration="3.182464108s" podCreationTimestamp="2026-01-29 15:29:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:14.173825108 +0000 UTC m=+1117.463075345" watchObservedRunningTime="2026-01-29 15:29:14.182464108 +0000 UTC m=+1117.471714355" Jan 29 15:29:15 crc kubenswrapper[4757]: I0129 15:29:15.159634 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6xltj" event={"ID":"e131102b-c200-45ff-a236-9b2cd0435f88","Type":"ContainerStarted","Data":"aa350199fdb0cee873785fa261cfe8603f7f37c0c7969f61cdd72422bd60472b"} Jan 29 15:29:15 crc kubenswrapper[4757]: I0129 15:29:15.159988 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-6xltj" Jan 29 15:29:15 crc kubenswrapper[4757]: I0129 15:29:15.179562 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-6xltj" podStartSLOduration=4.179540533 podStartE2EDuration="4.179540533s" podCreationTimestamp="2026-01-29 15:29:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:29:15.176459384 +0000 UTC m=+1118.465709631" 
watchObservedRunningTime="2026-01-29 15:29:15.179540533 +0000 UTC m=+1118.468790770" Jan 29 15:29:17 crc kubenswrapper[4757]: I0129 15:29:17.604634 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:29:17 crc kubenswrapper[4757]: I0129 15:29:17.604903 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:29:21 crc kubenswrapper[4757]: I0129 15:29:21.209140 4757 generic.go:334] "Generic (PLEG): container finished" podID="6bf723f3-fad1-4294-824a-97b5c64953d5" containerID="ffe5afafd7f7a2fda2d2bd07d9800afacc901d19c6c82323f7d3cd2ee1f2d258" exitCode=0 Jan 29 15:29:21 crc kubenswrapper[4757]: I0129 15:29:21.209229 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9dgf4" event={"ID":"6bf723f3-fad1-4294-824a-97b5c64953d5","Type":"ContainerDied","Data":"ffe5afafd7f7a2fda2d2bd07d9800afacc901d19c6c82323f7d3cd2ee1f2d258"} Jan 29 15:29:21 crc kubenswrapper[4757]: I0129 15:29:21.212546 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w97wd" event={"ID":"200c0920-028c-4895-a093-edf9ee940c1f","Type":"ContainerStarted","Data":"042d0ce1b3d3d77d09ea8aae19341bb2012438e87692175792308d2edd5e634d"} Jan 29 15:29:21 crc kubenswrapper[4757]: I0129 15:29:21.212706 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w97wd" Jan 29 15:29:21 crc kubenswrapper[4757]: I0129 15:29:21.277736 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w97wd" podStartSLOduration=2.634856703 podStartE2EDuration="10.277718125s" podCreationTimestamp="2026-01-29 15:29:11 +0000 UTC" firstStartedPulling="2026-01-29 15:29:12.943524622 +0000 UTC m=+1116.232774859" lastFinishedPulling="2026-01-29 15:29:20.586386044 +0000 UTC m=+1123.875636281" observedRunningTime="2026-01-29 15:29:21.27753301 +0000 UTC m=+1124.566783247" watchObservedRunningTime="2026-01-29 15:29:21.277718125 +0000 UTC m=+1124.566968382" Jan 29 15:29:22 crc kubenswrapper[4757]: I0129 15:29:22.220777 4757 generic.go:334] "Generic (PLEG): container finished" podID="6bf723f3-fad1-4294-824a-97b5c64953d5" containerID="4560e69da67a363964e273322e4c4f4388ff8b7b6d933161bc2013ae095ea433" exitCode=0 Jan 29 15:29:22 crc kubenswrapper[4757]: I0129 15:29:22.220842 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9dgf4" event={"ID":"6bf723f3-fad1-4294-824a-97b5c64953d5","Type":"ContainerDied","Data":"4560e69da67a363964e273322e4c4f4388ff8b7b6d933161bc2013ae095ea433"} Jan 29 15:29:23 crc kubenswrapper[4757]: I0129 15:29:23.229116 4757 generic.go:334] "Generic (PLEG): container finished" podID="6bf723f3-fad1-4294-824a-97b5c64953d5" containerID="c779a735e2f9230872426cb55803fc013143ba66b94e3cc2620fd041bca21655" exitCode=0 Jan 29 15:29:23 crc kubenswrapper[4757]: I0129 15:29:23.229170 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9dgf4" 
event={"ID":"6bf723f3-fad1-4294-824a-97b5c64953d5","Type":"ContainerDied","Data":"c779a735e2f9230872426cb55803fc013143ba66b94e3cc2620fd041bca21655"} Jan 29 15:29:23 crc kubenswrapper[4757]: I0129 15:29:23.690330 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-6xltj" Jan 29 15:29:24 crc kubenswrapper[4757]: I0129 15:29:24.241577 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9dgf4" event={"ID":"6bf723f3-fad1-4294-824a-97b5c64953d5","Type":"ContainerStarted","Data":"32ffc355ffbd388b0057566d5245158caa76f85e0ab129bb98974008e1370f99"} Jan 29 15:29:24 crc kubenswrapper[4757]: I0129 15:29:24.241622 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9dgf4" event={"ID":"6bf723f3-fad1-4294-824a-97b5c64953d5","Type":"ContainerStarted","Data":"9952b5ade82a618fa6d9dfb99cd36c95310310dcd1fb42e8122146a6bbf1ab98"} Jan 29 15:29:24 crc kubenswrapper[4757]: I0129 15:29:24.241635 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9dgf4" event={"ID":"6bf723f3-fad1-4294-824a-97b5c64953d5","Type":"ContainerStarted","Data":"026a29af44bf7441000c58363508f50ee9c9422c02d6cbef2955f005bf14f93e"} Jan 29 15:29:24 crc kubenswrapper[4757]: I0129 15:29:24.241645 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9dgf4" event={"ID":"6bf723f3-fad1-4294-824a-97b5c64953d5","Type":"ContainerStarted","Data":"df372081fcec2096f00b067a877f01a08045b65b9fc3b863b79dbe046668dc68"} Jan 29 15:29:25 crc kubenswrapper[4757]: I0129 15:29:25.251209 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9dgf4" event={"ID":"6bf723f3-fad1-4294-824a-97b5c64953d5","Type":"ContainerStarted","Data":"8c6537ed7ab0a423058ee6192d59c3c59d6e27aa2324856e545300f50bbff2de"} Jan 29 15:29:25 crc kubenswrapper[4757]: I0129 15:29:25.251515 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-9dgf4" Jan 29 15:29:25 crc kubenswrapper[4757]: I0129 15:29:25.251528 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9dgf4" event={"ID":"6bf723f3-fad1-4294-824a-97b5c64953d5","Type":"ContainerStarted","Data":"55f3ea46f3af5cf10b68b505d190b47a128e0f939040dc368cef0a6183145e32"} Jan 29 15:29:25 crc kubenswrapper[4757]: I0129 15:29:25.284500 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-9dgf4" podStartSLOduration=5.984076235 podStartE2EDuration="14.284484998s" podCreationTimestamp="2026-01-29 15:29:11 +0000 UTC" firstStartedPulling="2026-01-29 15:29:12.264080166 +0000 UTC m=+1115.553330403" lastFinishedPulling="2026-01-29 15:29:20.564488929 +0000 UTC m=+1123.853739166" observedRunningTime="2026-01-29 15:29:25.280172633 +0000 UTC m=+1128.569422870" watchObservedRunningTime="2026-01-29 15:29:25.284484998 +0000 UTC m=+1128.573735235" Jan 29 15:29:26 crc kubenswrapper[4757]: I0129 15:29:26.482389 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-7v6sm"] Jan 29 15:29:26 crc kubenswrapper[4757]: I0129 15:29:26.483158 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7v6sm" Jan 29 15:29:26 crc kubenswrapper[4757]: I0129 15:29:26.484913 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-4nw72" Jan 29 15:29:26 crc kubenswrapper[4757]: I0129 15:29:26.485609 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 29 15:29:26 crc kubenswrapper[4757]: I0129 15:29:26.485803 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 29 15:29:26 crc kubenswrapper[4757]: I0129 15:29:26.500238 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7v6sm"] Jan 29 15:29:26 crc kubenswrapper[4757]: I0129 15:29:26.549975 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v28l9\" (UniqueName: \"kubernetes.io/projected/5fdc16ee-9212-4598-a14a-826a0558a931-kube-api-access-v28l9\") pod \"openstack-operator-index-7v6sm\" (UID: \"5fdc16ee-9212-4598-a14a-826a0558a931\") " pod="openstack-operators/openstack-operator-index-7v6sm" Jan 29 15:29:26 crc kubenswrapper[4757]: I0129 15:29:26.651650 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v28l9\" (UniqueName: \"kubernetes.io/projected/5fdc16ee-9212-4598-a14a-826a0558a931-kube-api-access-v28l9\") pod \"openstack-operator-index-7v6sm\" (UID: \"5fdc16ee-9212-4598-a14a-826a0558a931\") " pod="openstack-operators/openstack-operator-index-7v6sm" Jan 29 15:29:26 crc kubenswrapper[4757]: I0129 15:29:26.683855 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v28l9\" (UniqueName: \"kubernetes.io/projected/5fdc16ee-9212-4598-a14a-826a0558a931-kube-api-access-v28l9\") pod \"openstack-operator-index-7v6sm\" (UID: \"5fdc16ee-9212-4598-a14a-826a0558a931\") " pod="openstack-operators/openstack-operator-index-7v6sm" Jan 29 15:29:26 crc kubenswrapper[4757]: I0129 15:29:26.798205 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7v6sm" Jan 29 15:29:27 crc kubenswrapper[4757]: I0129 15:29:27.081679 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-9dgf4" Jan 29 15:29:27 crc kubenswrapper[4757]: I0129 15:29:27.243962 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-9dgf4" Jan 29 15:29:27 crc kubenswrapper[4757]: I0129 15:29:27.244002 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7v6sm"] Jan 29 15:29:27 crc kubenswrapper[4757]: I0129 15:29:27.290748 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7v6sm" event={"ID":"5fdc16ee-9212-4598-a14a-826a0558a931","Type":"ContainerStarted","Data":"70a54c876242e86306a5ac2d19b02983331cd9e9baadcc82a87d94114a52b432"} Jan 29 15:29:29 crc kubenswrapper[4757]: I0129 15:29:29.653065 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-7v6sm"] Jan 29 15:29:30 crc kubenswrapper[4757]: I0129 15:29:30.261387 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-xsflf"] Jan 29 15:29:30 crc kubenswrapper[4757]: I0129 15:29:30.262132 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xsflf" Jan 29 15:29:30 crc kubenswrapper[4757]: I0129 15:29:30.285629 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xsflf"] Jan 29 15:29:30 crc kubenswrapper[4757]: I0129 15:29:30.435557 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8wk7\" (UniqueName: \"kubernetes.io/projected/4a4af715-ebf7-4ad7-a1ff-7b3a4a90512a-kube-api-access-j8wk7\") pod \"openstack-operator-index-xsflf\" (UID: \"4a4af715-ebf7-4ad7-a1ff-7b3a4a90512a\") " pod="openstack-operators/openstack-operator-index-xsflf" Jan 29 15:29:30 crc kubenswrapper[4757]: I0129 15:29:30.537186 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8wk7\" (UniqueName: \"kubernetes.io/projected/4a4af715-ebf7-4ad7-a1ff-7b3a4a90512a-kube-api-access-j8wk7\") pod \"openstack-operator-index-xsflf\" (UID: \"4a4af715-ebf7-4ad7-a1ff-7b3a4a90512a\") " pod="openstack-operators/openstack-operator-index-xsflf" Jan 29 15:29:30 crc kubenswrapper[4757]: I0129 15:29:30.572941 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8wk7\" (UniqueName: \"kubernetes.io/projected/4a4af715-ebf7-4ad7-a1ff-7b3a4a90512a-kube-api-access-j8wk7\") pod \"openstack-operator-index-xsflf\" (UID: \"4a4af715-ebf7-4ad7-a1ff-7b3a4a90512a\") " pod="openstack-operators/openstack-operator-index-xsflf" Jan 29 15:29:30 crc kubenswrapper[4757]: I0129 15:29:30.590137 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xsflf" Jan 29 15:29:30 crc kubenswrapper[4757]: I0129 15:29:30.769051 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xsflf"] Jan 29 15:29:31 crc kubenswrapper[4757]: I0129 15:29:31.322246 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7v6sm" event={"ID":"5fdc16ee-9212-4598-a14a-826a0558a931","Type":"ContainerStarted","Data":"52cadb9c4263d78989302ecf19dae3921e931d70d2fbee8235baaa6cffdbfc0e"} Jan 29 15:29:31 crc kubenswrapper[4757]: I0129 15:29:31.322351 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-7v6sm" podUID="5fdc16ee-9212-4598-a14a-826a0558a931" containerName="registry-server" containerID="cri-o://52cadb9c4263d78989302ecf19dae3921e931d70d2fbee8235baaa6cffdbfc0e" gracePeriod=2 Jan 29 15:29:31 crc kubenswrapper[4757]: I0129 15:29:31.323911 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xsflf" event={"ID":"4a4af715-ebf7-4ad7-a1ff-7b3a4a90512a","Type":"ContainerStarted","Data":"865f0ecf7eda7e6d54707c1a9814307d4e4134f010e1d370387e8abdb877ce6b"} Jan 29 15:29:31 crc kubenswrapper[4757]: I0129 15:29:31.323950 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xsflf" event={"ID":"4a4af715-ebf7-4ad7-a1ff-7b3a4a90512a","Type":"ContainerStarted","Data":"aff0c1f0f88c0ee9d63fd5548af9306cef5de8ac64b1a5e9390930a35af872ac"} Jan 29 15:29:31 crc kubenswrapper[4757]: I0129 15:29:31.341563 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-7v6sm" podStartSLOduration=2.434477854 podStartE2EDuration="5.341540178s" podCreationTimestamp="2026-01-29 15:29:26 +0000 UTC" firstStartedPulling="2026-01-29 15:29:27.252662535 +0000 UTC m=+1130.541912772" lastFinishedPulling="2026-01-29 15:29:30.159724859 +0000 UTC m=+1133.448975096" observedRunningTime="2026-01-29 15:29:31.338583033 +0000 UTC m=+1134.627833280" watchObservedRunningTime="2026-01-29 15:29:31.341540178 +0000 UTC m=+1134.630790415" Jan 29 15:29:31 crc kubenswrapper[4757]: I0129 15:29:31.365654 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-xsflf" podStartSLOduration=1.311563069 podStartE2EDuration="1.365635187s" podCreationTimestamp="2026-01-29 15:29:30 +0000 UTC" firstStartedPulling="2026-01-29 15:29:30.778002902 +0000 UTC m=+1134.067253139" lastFinishedPulling="2026-01-29 15:29:30.83207502 +0000 UTC m=+1134.121325257" observedRunningTime="2026-01-29 15:29:31.361479956 +0000 UTC m=+1134.650730193" watchObservedRunningTime="2026-01-29 15:29:31.365635187 +0000 UTC m=+1134.654885444" Jan 29 15:29:31 crc kubenswrapper[4757]: I0129 15:29:31.692709 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7v6sm" Jan 29 15:29:31 crc kubenswrapper[4757]: I0129 15:29:31.858464 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v28l9\" (UniqueName: \"kubernetes.io/projected/5fdc16ee-9212-4598-a14a-826a0558a931-kube-api-access-v28l9\") pod \"5fdc16ee-9212-4598-a14a-826a0558a931\" (UID: \"5fdc16ee-9212-4598-a14a-826a0558a931\") " Jan 29 15:29:31 crc kubenswrapper[4757]: I0129 15:29:31.872603 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fdc16ee-9212-4598-a14a-826a0558a931-kube-api-access-v28l9" (OuterVolumeSpecName: "kube-api-access-v28l9") pod "5fdc16ee-9212-4598-a14a-826a0558a931" (UID: "5fdc16ee-9212-4598-a14a-826a0558a931"). InnerVolumeSpecName "kube-api-access-v28l9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:29:31 crc kubenswrapper[4757]: I0129 15:29:31.959905 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v28l9\" (UniqueName: \"kubernetes.io/projected/5fdc16ee-9212-4598-a14a-826a0558a931-kube-api-access-v28l9\") on node \"crc\" DevicePath \"\"" Jan 29 15:29:32 crc kubenswrapper[4757]: I0129 15:29:32.331388 4757 generic.go:334] "Generic (PLEG): container finished" podID="5fdc16ee-9212-4598-a14a-826a0558a931" containerID="52cadb9c4263d78989302ecf19dae3921e931d70d2fbee8235baaa6cffdbfc0e" exitCode=0 Jan 29 15:29:32 crc kubenswrapper[4757]: I0129 15:29:32.331470 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7v6sm" Jan 29 15:29:32 crc kubenswrapper[4757]: I0129 15:29:32.331523 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7v6sm" event={"ID":"5fdc16ee-9212-4598-a14a-826a0558a931","Type":"ContainerDied","Data":"52cadb9c4263d78989302ecf19dae3921e931d70d2fbee8235baaa6cffdbfc0e"} Jan 29 15:29:32 crc kubenswrapper[4757]: I0129 15:29:32.331562 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7v6sm" event={"ID":"5fdc16ee-9212-4598-a14a-826a0558a931","Type":"ContainerDied","Data":"70a54c876242e86306a5ac2d19b02983331cd9e9baadcc82a87d94114a52b432"} Jan 29 15:29:32 crc kubenswrapper[4757]: I0129 15:29:32.331591 4757 scope.go:117] "RemoveContainer" containerID="52cadb9c4263d78989302ecf19dae3921e931d70d2fbee8235baaa6cffdbfc0e" Jan 29 15:29:32 crc kubenswrapper[4757]: I0129 15:29:32.351582 4757 scope.go:117] "RemoveContainer" containerID="52cadb9c4263d78989302ecf19dae3921e931d70d2fbee8235baaa6cffdbfc0e" Jan 29 15:29:32 crc kubenswrapper[4757]: E0129 15:29:32.352243 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52cadb9c4263d78989302ecf19dae3921e931d70d2fbee8235baaa6cffdbfc0e\": container with ID starting with 52cadb9c4263d78989302ecf19dae3921e931d70d2fbee8235baaa6cffdbfc0e not found: ID does not exist" containerID="52cadb9c4263d78989302ecf19dae3921e931d70d2fbee8235baaa6cffdbfc0e" Jan 29 15:29:32 crc kubenswrapper[4757]: I0129 15:29:32.352306 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52cadb9c4263d78989302ecf19dae3921e931d70d2fbee8235baaa6cffdbfc0e"} err="failed to get container status \"52cadb9c4263d78989302ecf19dae3921e931d70d2fbee8235baaa6cffdbfc0e\": rpc error: code = NotFound desc = could not find container 
\"52cadb9c4263d78989302ecf19dae3921e931d70d2fbee8235baaa6cffdbfc0e\": container with ID starting with 52cadb9c4263d78989302ecf19dae3921e931d70d2fbee8235baaa6cffdbfc0e not found: ID does not exist" Jan 29 15:29:32 crc kubenswrapper[4757]: I0129 15:29:32.366584 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-7v6sm"] Jan 29 15:29:32 crc kubenswrapper[4757]: I0129 15:29:32.370580 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-7v6sm"] Jan 29 15:29:32 crc kubenswrapper[4757]: I0129 15:29:32.694366 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-w97wd" Jan 29 15:29:32 crc kubenswrapper[4757]: I0129 15:29:32.814859 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-sll65" Jan 29 15:29:33 crc kubenswrapper[4757]: I0129 15:29:33.402913 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fdc16ee-9212-4598-a14a-826a0558a931" path="/var/lib/kubelet/pods/5fdc16ee-9212-4598-a14a-826a0558a931/volumes" Jan 29 15:29:40 crc kubenswrapper[4757]: I0129 15:29:40.590293 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-xsflf" Jan 29 15:29:40 crc kubenswrapper[4757]: I0129 15:29:40.590862 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-xsflf" Jan 29 15:29:40 crc kubenswrapper[4757]: I0129 15:29:40.630823 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-xsflf" Jan 29 15:29:41 crc kubenswrapper[4757]: I0129 15:29:41.412755 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-xsflf" Jan 29 15:29:42 crc kubenswrapper[4757]: I0129 15:29:42.083616 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-9dgf4" Jan 29 15:29:44 crc kubenswrapper[4757]: I0129 15:29:44.093687 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm"] Jan 29 15:29:44 crc kubenswrapper[4757]: E0129 15:29:44.093943 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fdc16ee-9212-4598-a14a-826a0558a931" containerName="registry-server" Jan 29 15:29:44 crc kubenswrapper[4757]: I0129 15:29:44.093956 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fdc16ee-9212-4598-a14a-826a0558a931" containerName="registry-server" Jan 29 15:29:44 crc kubenswrapper[4757]: I0129 15:29:44.094063 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fdc16ee-9212-4598-a14a-826a0558a931" containerName="registry-server" Jan 29 15:29:44 crc kubenswrapper[4757]: I0129 15:29:44.095028 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm" Jan 29 15:29:44 crc kubenswrapper[4757]: I0129 15:29:44.099402 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-zshkv" Jan 29 15:29:44 crc kubenswrapper[4757]: I0129 15:29:44.107131 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm"] Jan 29 15:29:44 crc kubenswrapper[4757]: I0129 15:29:44.236913 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4-util\") pod \"b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm\" (UID: \"8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4\") " pod="openstack-operators/b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm" Jan 29 15:29:44 crc kubenswrapper[4757]: I0129 15:29:44.236957 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4-bundle\") pod \"b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm\" (UID: \"8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4\") " pod="openstack-operators/b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm" Jan 29 15:29:44 crc kubenswrapper[4757]: I0129 15:29:44.237065 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74vk2\" (UniqueName: \"kubernetes.io/projected/8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4-kube-api-access-74vk2\") pod \"b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm\" (UID: \"8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4\") " pod="openstack-operators/b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm" Jan 29 15:29:44 crc kubenswrapper[4757]: I0129 15:29:44.338501 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74vk2\" (UniqueName: \"kubernetes.io/projected/8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4-kube-api-access-74vk2\") pod \"b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm\" (UID: \"8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4\") " pod="openstack-operators/b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm" Jan 29 15:29:44 crc kubenswrapper[4757]: I0129 15:29:44.338754 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4-util\") pod \"b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm\" (UID: \"8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4\") " pod="openstack-operators/b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm" Jan 29 15:29:44 crc kubenswrapper[4757]: I0129 15:29:44.338884 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4-bundle\") pod \"b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm\" (UID: \"8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4\") " pod="openstack-operators/b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm" Jan 29 15:29:44 crc kubenswrapper[4757]: I0129 15:29:44.339328 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4-bundle\") pod \"b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm\" (UID: \"8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4\") " pod="openstack-operators/b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm" Jan 29 15:29:44 crc kubenswrapper[4757]: I0129 15:29:44.339254 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4-util\") pod \"b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm\" (UID: \"8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4\") " pod="openstack-operators/b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm" Jan 29 15:29:44 crc kubenswrapper[4757]: I0129 15:29:44.357315 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74vk2\" (UniqueName: \"kubernetes.io/projected/8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4-kube-api-access-74vk2\") pod \"b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm\" (UID: \"8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4\") " pod="openstack-operators/b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm" Jan 29 15:29:44 crc kubenswrapper[4757]: I0129 15:29:44.416634 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm" Jan 29 15:29:44 crc kubenswrapper[4757]: I0129 15:29:44.797828 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm"] Jan 29 15:29:45 crc kubenswrapper[4757]: I0129 15:29:45.413503 4757 generic.go:334] "Generic (PLEG): container finished" podID="8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4" containerID="c7e191cf74e7aa6433cc480617abf22480e9052d9c673d035140d199799180a4" exitCode=0 Jan 29 15:29:45 crc kubenswrapper[4757]: I0129 15:29:45.413588 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm" event={"ID":"8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4","Type":"ContainerDied","Data":"c7e191cf74e7aa6433cc480617abf22480e9052d9c673d035140d199799180a4"} Jan 29 15:29:45 crc kubenswrapper[4757]: I0129 15:29:45.413861 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm" event={"ID":"8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4","Type":"ContainerStarted","Data":"dcab4e7b40aa6bd31741954ff567c1369024c731fa6b66e7f54e4217fbb1048a"} Jan 29 15:29:46 crc kubenswrapper[4757]: I0129 15:29:46.422713 4757 generic.go:334] "Generic (PLEG): container finished" podID="8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4" containerID="a74c6e67ab28fc0fcbd36f3e53246a423c883e7672b7e5cd69d217e6cb0861e2" exitCode=0 Jan 29 15:29:46 crc kubenswrapper[4757]: I0129 15:29:46.422760 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm" event={"ID":"8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4","Type":"ContainerDied","Data":"a74c6e67ab28fc0fcbd36f3e53246a423c883e7672b7e5cd69d217e6cb0861e2"} Jan 29 15:29:47 crc kubenswrapper[4757]: I0129 15:29:47.432532 4757 generic.go:334] "Generic (PLEG): container finished" podID="8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4" containerID="369cb5108d88a0b6feb2f5207aacf142eee60f6b5ed1da49867200338164edf2" exitCode=0 Jan 29 15:29:47 crc kubenswrapper[4757]: I0129 15:29:47.432605 4757 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm" event={"ID":"8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4","Type":"ContainerDied","Data":"369cb5108d88a0b6feb2f5207aacf142eee60f6b5ed1da49867200338164edf2"} Jan 29 15:29:47 crc kubenswrapper[4757]: I0129 15:29:47.605142 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:29:47 crc kubenswrapper[4757]: I0129 15:29:47.605218 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:29:48 crc kubenswrapper[4757]: I0129 15:29:48.697672 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm" Jan 29 15:29:48 crc kubenswrapper[4757]: I0129 15:29:48.808302 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74vk2\" (UniqueName: \"kubernetes.io/projected/8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4-kube-api-access-74vk2\") pod \"8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4\" (UID: \"8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4\") " Jan 29 15:29:48 crc kubenswrapper[4757]: I0129 15:29:48.808400 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4-util\") pod \"8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4\" (UID: \"8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4\") " Jan 29 15:29:48 crc kubenswrapper[4757]: I0129 15:29:48.808483 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4-bundle\") pod \"8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4\" (UID: \"8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4\") " Jan 29 15:29:48 crc kubenswrapper[4757]: I0129 15:29:48.809415 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4-bundle" (OuterVolumeSpecName: "bundle") pod "8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4" (UID: "8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:29:48 crc kubenswrapper[4757]: I0129 15:29:48.817460 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4-kube-api-access-74vk2" (OuterVolumeSpecName: "kube-api-access-74vk2") pod "8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4" (UID: "8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4"). InnerVolumeSpecName "kube-api-access-74vk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:29:48 crc kubenswrapper[4757]: I0129 15:29:48.822225 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4-util" (OuterVolumeSpecName: "util") pod "8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4" (UID: "8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:29:48 crc kubenswrapper[4757]: I0129 15:29:48.910307 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74vk2\" (UniqueName: \"kubernetes.io/projected/8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4-kube-api-access-74vk2\") on node \"crc\" DevicePath \"\"" Jan 29 15:29:48 crc kubenswrapper[4757]: I0129 15:29:48.910350 4757 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4-util\") on node \"crc\" DevicePath \"\"" Jan 29 15:29:48 crc kubenswrapper[4757]: I0129 15:29:48.910367 4757 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:29:49 crc kubenswrapper[4757]: I0129 15:29:49.449999 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm" event={"ID":"8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4","Type":"ContainerDied","Data":"dcab4e7b40aa6bd31741954ff567c1369024c731fa6b66e7f54e4217fbb1048a"} Jan 29 15:29:49 crc kubenswrapper[4757]: I0129 15:29:49.450038 4757 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcab4e7b40aa6bd31741954ff567c1369024c731fa6b66e7f54e4217fbb1048a" Jan 29 15:29:49 crc kubenswrapper[4757]: I0129 15:29:49.450352 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm" Jan 29 15:29:51 crc kubenswrapper[4757]: I0129 15:29:51.629100 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6dff856477-hgxdq"] Jan 29 15:29:51 crc kubenswrapper[4757]: E0129 15:29:51.629618 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4" containerName="pull" Jan 29 15:29:51 crc kubenswrapper[4757]: I0129 15:29:51.629631 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4" containerName="pull" Jan 29 15:29:51 crc kubenswrapper[4757]: E0129 15:29:51.629642 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4" containerName="extract" Jan 29 15:29:51 crc kubenswrapper[4757]: I0129 15:29:51.629647 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4" containerName="extract" Jan 29 15:29:51 crc kubenswrapper[4757]: E0129 15:29:51.629670 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4" containerName="util" Jan 29 15:29:51 crc kubenswrapper[4757]: I0129 15:29:51.629677 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4" containerName="util" Jan 29 15:29:51 crc kubenswrapper[4757]: I0129 15:29:51.629772 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4" containerName="extract" Jan 29 15:29:51 crc kubenswrapper[4757]: I0129 15:29:51.630160 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6dff856477-hgxdq" Jan 29 15:29:51 crc kubenswrapper[4757]: I0129 15:29:51.639042 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-t5bcs" Jan 29 15:29:51 crc kubenswrapper[4757]: I0129 15:29:51.723835 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6dff856477-hgxdq"] Jan 29 15:29:51 crc kubenswrapper[4757]: I0129 15:29:51.750140 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cndlt\" (UniqueName: \"kubernetes.io/projected/f40b9aa0-bc1b-49bf-a4ac-1ac90da4734e-kube-api-access-cndlt\") pod \"openstack-operator-controller-init-6dff856477-hgxdq\" (UID: \"f40b9aa0-bc1b-49bf-a4ac-1ac90da4734e\") " pod="openstack-operators/openstack-operator-controller-init-6dff856477-hgxdq" Jan 29 15:29:51 crc kubenswrapper[4757]: I0129 15:29:51.851904 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cndlt\" (UniqueName: \"kubernetes.io/projected/f40b9aa0-bc1b-49bf-a4ac-1ac90da4734e-kube-api-access-cndlt\") pod \"openstack-operator-controller-init-6dff856477-hgxdq\" (UID: \"f40b9aa0-bc1b-49bf-a4ac-1ac90da4734e\") " pod="openstack-operators/openstack-operator-controller-init-6dff856477-hgxdq" Jan 29 15:29:51 crc kubenswrapper[4757]: I0129 15:29:51.872413 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cndlt\" (UniqueName: \"kubernetes.io/projected/f40b9aa0-bc1b-49bf-a4ac-1ac90da4734e-kube-api-access-cndlt\") pod \"openstack-operator-controller-init-6dff856477-hgxdq\" (UID: \"f40b9aa0-bc1b-49bf-a4ac-1ac90da4734e\") " pod="openstack-operators/openstack-operator-controller-init-6dff856477-hgxdq" Jan 29 15:29:51 crc kubenswrapper[4757]: I0129 15:29:51.949029 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6dff856477-hgxdq" Jan 29 15:29:52 crc kubenswrapper[4757]: I0129 15:29:52.407451 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6dff856477-hgxdq"] Jan 29 15:29:52 crc kubenswrapper[4757]: I0129 15:29:52.468969 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6dff856477-hgxdq" event={"ID":"f40b9aa0-bc1b-49bf-a4ac-1ac90da4734e","Type":"ContainerStarted","Data":"80391c9a8801fc36dad140a163fb3d12f876b98599987ce245ad9c672e9b888c"} Jan 29 15:29:57 crc kubenswrapper[4757]: I0129 15:29:57.551871 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6dff856477-hgxdq" event={"ID":"f40b9aa0-bc1b-49bf-a4ac-1ac90da4734e","Type":"ContainerStarted","Data":"9992e1dc02287e308b497174549007029fbfac840327e236711e9d69a300af0b"} Jan 29 15:29:57 crc kubenswrapper[4757]: I0129 15:29:57.552501 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6dff856477-hgxdq" Jan 29 15:29:57 crc kubenswrapper[4757]: I0129 15:29:57.585447 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6dff856477-hgxdq" podStartSLOduration=2.070471027 podStartE2EDuration="6.585429432s" podCreationTimestamp="2026-01-29 15:29:51 +0000 UTC" firstStartedPulling="2026-01-29 15:29:52.425733976 +0000 UTC m=+1155.714984243" lastFinishedPulling="2026-01-29 15:29:56.940692411 +0000 UTC m=+1160.229942648" observedRunningTime="2026-01-29 15:29:57.579820329 +0000 UTC m=+1160.869070576" watchObservedRunningTime="2026-01-29 15:29:57.585429432 +0000 UTC m=+1160.874679669" Jan 29 15:30:00 crc kubenswrapper[4757]: I0129 15:30:00.137162 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495010-hjchq"] Jan 29 15:30:00 crc kubenswrapper[4757]: I0129 15:30:00.138402 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-hjchq" Jan 29 15:30:00 crc kubenswrapper[4757]: I0129 15:30:00.140960 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 15:30:00 crc kubenswrapper[4757]: I0129 15:30:00.141551 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 15:30:00 crc kubenswrapper[4757]: I0129 15:30:00.143634 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495010-hjchq"] Jan 29 15:30:00 crc kubenswrapper[4757]: I0129 15:30:00.181474 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/085d1565-bbaa-4eda-935d-40f7e302539d-secret-volume\") pod \"collect-profiles-29495010-hjchq\" (UID: \"085d1565-bbaa-4eda-935d-40f7e302539d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-hjchq" Jan 29 15:30:00 crc kubenswrapper[4757]: I0129 15:30:00.181526 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jtx7\" (UniqueName: \"kubernetes.io/projected/085d1565-bbaa-4eda-935d-40f7e302539d-kube-api-access-9jtx7\") pod \"collect-profiles-29495010-hjchq\" (UID: \"085d1565-bbaa-4eda-935d-40f7e302539d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-hjchq" Jan 29 15:30:00 crc kubenswrapper[4757]: I0129 15:30:00.181557 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/085d1565-bbaa-4eda-935d-40f7e302539d-config-volume\") pod \"collect-profiles-29495010-hjchq\" (UID: \"085d1565-bbaa-4eda-935d-40f7e302539d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-hjchq" Jan 29 15:30:00 crc kubenswrapper[4757]: I0129 15:30:00.283955 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/085d1565-bbaa-4eda-935d-40f7e302539d-config-volume\") pod \"collect-profiles-29495010-hjchq\" (UID: \"085d1565-bbaa-4eda-935d-40f7e302539d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-hjchq" Jan 29 15:30:00 crc kubenswrapper[4757]: I0129 15:30:00.284057 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/085d1565-bbaa-4eda-935d-40f7e302539d-secret-volume\") pod \"collect-profiles-29495010-hjchq\" (UID: \"085d1565-bbaa-4eda-935d-40f7e302539d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-hjchq" Jan 29 15:30:00 crc kubenswrapper[4757]: I0129 15:30:00.284079 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jtx7\" (UniqueName: \"kubernetes.io/projected/085d1565-bbaa-4eda-935d-40f7e302539d-kube-api-access-9jtx7\") pod \"collect-profiles-29495010-hjchq\" (UID: \"085d1565-bbaa-4eda-935d-40f7e302539d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-hjchq" Jan 29 15:30:00 crc kubenswrapper[4757]: I0129 15:30:00.285043 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/085d1565-bbaa-4eda-935d-40f7e302539d-config-volume\") pod 
\"collect-profiles-29495010-hjchq\" (UID: \"085d1565-bbaa-4eda-935d-40f7e302539d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-hjchq" Jan 29 15:30:00 crc kubenswrapper[4757]: I0129 15:30:00.307133 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jtx7\" (UniqueName: \"kubernetes.io/projected/085d1565-bbaa-4eda-935d-40f7e302539d-kube-api-access-9jtx7\") pod \"collect-profiles-29495010-hjchq\" (UID: \"085d1565-bbaa-4eda-935d-40f7e302539d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-hjchq" Jan 29 15:30:00 crc kubenswrapper[4757]: I0129 15:30:00.308194 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/085d1565-bbaa-4eda-935d-40f7e302539d-secret-volume\") pod \"collect-profiles-29495010-hjchq\" (UID: \"085d1565-bbaa-4eda-935d-40f7e302539d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-hjchq" Jan 29 15:30:00 crc kubenswrapper[4757]: I0129 15:30:00.470676 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-hjchq" Jan 29 15:30:00 crc kubenswrapper[4757]: I0129 15:30:00.704905 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495010-hjchq"] Jan 29 15:30:01 crc kubenswrapper[4757]: E0129 15:30:01.089733 4757 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod085d1565_bbaa_4eda_935d_40f7e302539d.slice/crio-08108aadb1bc6eae473bc9bf59cea506b94520d40533b26ba48173edb2f6e37f.scope\": RecentStats: unable to find data in memory cache]" Jan 29 15:30:01 crc kubenswrapper[4757]: I0129 15:30:01.581294 4757 generic.go:334] "Generic (PLEG): container finished" podID="085d1565-bbaa-4eda-935d-40f7e302539d" containerID="08108aadb1bc6eae473bc9bf59cea506b94520d40533b26ba48173edb2f6e37f" exitCode=0 Jan 29 15:30:01 crc kubenswrapper[4757]: I0129 15:30:01.581375 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-hjchq" event={"ID":"085d1565-bbaa-4eda-935d-40f7e302539d","Type":"ContainerDied","Data":"08108aadb1bc6eae473bc9bf59cea506b94520d40533b26ba48173edb2f6e37f"} Jan 29 15:30:01 crc kubenswrapper[4757]: I0129 15:30:01.582595 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-hjchq" event={"ID":"085d1565-bbaa-4eda-935d-40f7e302539d","Type":"ContainerStarted","Data":"d580e0fccb0a5b6100da5be0ea1dda911cc2b651872959933ea4efd53a38871d"} Jan 29 15:30:02 crc kubenswrapper[4757]: I0129 15:30:02.853985 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-hjchq" Jan 29 15:30:02 crc kubenswrapper[4757]: I0129 15:30:02.917072 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jtx7\" (UniqueName: \"kubernetes.io/projected/085d1565-bbaa-4eda-935d-40f7e302539d-kube-api-access-9jtx7\") pod \"085d1565-bbaa-4eda-935d-40f7e302539d\" (UID: \"085d1565-bbaa-4eda-935d-40f7e302539d\") " Jan 29 15:30:02 crc kubenswrapper[4757]: I0129 15:30:02.917171 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/085d1565-bbaa-4eda-935d-40f7e302539d-secret-volume\") pod \"085d1565-bbaa-4eda-935d-40f7e302539d\" (UID: \"085d1565-bbaa-4eda-935d-40f7e302539d\") " Jan 29 15:30:02 crc kubenswrapper[4757]: I0129 15:30:02.917198 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/085d1565-bbaa-4eda-935d-40f7e302539d-config-volume\") pod \"085d1565-bbaa-4eda-935d-40f7e302539d\" (UID: \"085d1565-bbaa-4eda-935d-40f7e302539d\") " Jan 29 15:30:02 crc kubenswrapper[4757]: I0129 15:30:02.917845 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/085d1565-bbaa-4eda-935d-40f7e302539d-config-volume" (OuterVolumeSpecName: "config-volume") pod "085d1565-bbaa-4eda-935d-40f7e302539d" (UID: "085d1565-bbaa-4eda-935d-40f7e302539d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:30:02 crc kubenswrapper[4757]: I0129 15:30:02.923460 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/085d1565-bbaa-4eda-935d-40f7e302539d-kube-api-access-9jtx7" (OuterVolumeSpecName: "kube-api-access-9jtx7") pod "085d1565-bbaa-4eda-935d-40f7e302539d" (UID: "085d1565-bbaa-4eda-935d-40f7e302539d"). InnerVolumeSpecName "kube-api-access-9jtx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:30:02 crc kubenswrapper[4757]: I0129 15:30:02.923478 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/085d1565-bbaa-4eda-935d-40f7e302539d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "085d1565-bbaa-4eda-935d-40f7e302539d" (UID: "085d1565-bbaa-4eda-935d-40f7e302539d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:30:03 crc kubenswrapper[4757]: I0129 15:30:03.018922 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jtx7\" (UniqueName: \"kubernetes.io/projected/085d1565-bbaa-4eda-935d-40f7e302539d-kube-api-access-9jtx7\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:03 crc kubenswrapper[4757]: I0129 15:30:03.018954 4757 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/085d1565-bbaa-4eda-935d-40f7e302539d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:03 crc kubenswrapper[4757]: I0129 15:30:03.018965 4757 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/085d1565-bbaa-4eda-935d-40f7e302539d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:30:03 crc kubenswrapper[4757]: I0129 15:30:03.601731 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-hjchq" event={"ID":"085d1565-bbaa-4eda-935d-40f7e302539d","Type":"ContainerDied","Data":"d580e0fccb0a5b6100da5be0ea1dda911cc2b651872959933ea4efd53a38871d"} Jan 29 15:30:03 crc kubenswrapper[4757]: I0129 15:30:03.602092 4757 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d580e0fccb0a5b6100da5be0ea1dda911cc2b651872959933ea4efd53a38871d" Jan 29 15:30:03 crc kubenswrapper[4757]: I0129 15:30:03.601812 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-hjchq" Jan 29 15:30:11 crc kubenswrapper[4757]: I0129 15:30:11.951917 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6dff856477-hgxdq" Jan 29 15:30:17 crc kubenswrapper[4757]: I0129 15:30:17.604660 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:30:17 crc kubenswrapper[4757]: I0129 15:30:17.605170 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:30:17 crc kubenswrapper[4757]: I0129 15:30:17.605221 4757 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" Jan 29 15:30:17 crc kubenswrapper[4757]: I0129 15:30:17.605857 4757 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"26224c213349170284329c384d8b105e9ad831590acee9b01765c926f542d25f"} pod="openshift-machine-config-operator/machine-config-daemon-45q8t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:30:17 crc kubenswrapper[4757]: I0129 15:30:17.605931 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" 
containerID="cri-o://26224c213349170284329c384d8b105e9ad831590acee9b01765c926f542d25f" gracePeriod=600 Jan 29 15:30:18 crc kubenswrapper[4757]: I0129 15:30:18.694529 4757 generic.go:334] "Generic (PLEG): container finished" podID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerID="26224c213349170284329c384d8b105e9ad831590acee9b01765c926f542d25f" exitCode=0 Jan 29 15:30:18 crc kubenswrapper[4757]: I0129 15:30:18.694602 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerDied","Data":"26224c213349170284329c384d8b105e9ad831590acee9b01765c926f542d25f"} Jan 29 15:30:18 crc kubenswrapper[4757]: I0129 15:30:18.695460 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerStarted","Data":"9184ceb222e3aaf913deba2b1b97656dc4b2da7e3588e7cc528958150153f8ad"} Jan 29 15:30:18 crc kubenswrapper[4757]: I0129 15:30:18.695481 4757 scope.go:117] "RemoveContainer" containerID="42c07bced4a4cc16b1156866e5aecd360caa654a1d5d2eaf998a21db73871643" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.284586 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-79f547bdd5-7bg8k"] Jan 29 15:30:29 crc kubenswrapper[4757]: E0129 15:30:29.293432 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="085d1565-bbaa-4eda-935d-40f7e302539d" containerName="collect-profiles" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.293477 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="085d1565-bbaa-4eda-935d-40f7e302539d" containerName="collect-profiles" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.293900 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="085d1565-bbaa-4eda-935d-40f7e302539d" containerName="collect-profiles" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.294604 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-79f547bdd5-7bg8k" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.301708 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-k97c7" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.306662 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-858d89fd-hf2f8"] Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.307913 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-858d89fd-hf2f8" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.335799 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-hsvlv" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.341605 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-dd77988f8-h7w6l"] Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.342351 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-dd77988f8-h7w6l" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.347712 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-9mlhf" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.357643 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-79f547bdd5-7bg8k"] Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.365341 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-858d89fd-hf2f8"] Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.373343 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-f8c4db9df-76jqr"] Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.374246 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-f8c4db9df-76jqr" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.377230 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-g2jmn" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.387018 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-d8b84fbc-qrdfv"] Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.388000 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-d8b84fbc-qrdfv" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.389613 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-544d6" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.391300 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-f8c4db9df-76jqr"] Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.422464 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-s5px2"] Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.423183 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-dd77988f8-h7w6l"] Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.423257 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-s5px2" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.430713 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-rn9ds" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.457383 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-qzz5n"] Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.458324 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-qzz5n" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.461231 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-d8b84fbc-qrdfv"] Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.464522 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.464610 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-2jhrc" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.491346 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-s5px2"] Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.493748 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52gz5\" (UniqueName: \"kubernetes.io/projected/0ae0f41a-2010-4578-a849-a47110a5cad7-kube-api-access-52gz5\") pod \"heat-operator-controller-manager-d8b84fbc-qrdfv\" (UID: \"0ae0f41a-2010-4578-a849-a47110a5cad7\") " pod="openstack-operators/heat-operator-controller-manager-d8b84fbc-qrdfv" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.493808 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9xgx\" (UniqueName: \"kubernetes.io/projected/dc003609-336a-4cc2-a0fa-e3cd693a803d-kube-api-access-b9xgx\") pod \"designate-operator-controller-manager-dd77988f8-h7w6l\" (UID: \"dc003609-336a-4cc2-a0fa-e3cd693a803d\") " pod="openstack-operators/designate-operator-controller-manager-dd77988f8-h7w6l" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.493842 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr4zb\" (UniqueName: \"kubernetes.io/projected/629b88f8-504a-4e19-914a-7359c131deb2-kube-api-access-gr4zb\") pod \"barbican-operator-controller-manager-79f547bdd5-7bg8k\" (UID: \"629b88f8-504a-4e19-914a-7359c131deb2\") " pod="openstack-operators/barbican-operator-controller-manager-79f547bdd5-7bg8k" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.493943 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbnb8\" (UniqueName: \"kubernetes.io/projected/2db120e3-48a1-46c6-9d75-9e60012dcff4-kube-api-access-gbnb8\") pod \"cinder-operator-controller-manager-858d89fd-hf2f8\" (UID: \"2db120e3-48a1-46c6-9d75-9e60012dcff4\") " pod="openstack-operators/cinder-operator-controller-manager-858d89fd-hf2f8" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.493981 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmh9g\" (UniqueName: \"kubernetes.io/projected/dc96ab98-0882-4c4c-8011-642f5da0ce8d-kube-api-access-cmh9g\") pod \"glance-operator-controller-manager-f8c4db9df-76jqr\" (UID: \"dc96ab98-0882-4c4c-8011-642f5da0ce8d\") " pod="openstack-operators/glance-operator-controller-manager-f8c4db9df-76jqr" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.507304 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-qzz5n"] Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.546665 4757 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-866c9d5b98-tbvmq"] Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.547393 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-866c9d5b98-tbvmq" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.554623 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-tmknb" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.586324 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-866c9d5b98-tbvmq"] Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.599944 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr4zb\" (UniqueName: \"kubernetes.io/projected/629b88f8-504a-4e19-914a-7359c131deb2-kube-api-access-gr4zb\") pod \"barbican-operator-controller-manager-79f547bdd5-7bg8k\" (UID: \"629b88f8-504a-4e19-914a-7359c131deb2\") " pod="openstack-operators/barbican-operator-controller-manager-79f547bdd5-7bg8k" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.599994 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m6rl\" (UniqueName: \"kubernetes.io/projected/edc8f287-a4c1-4558-b279-5159e135e838-kube-api-access-7m6rl\") pod \"horizon-operator-controller-manager-5fb775575f-s5px2\" (UID: \"edc8f287-a4c1-4558-b279-5159e135e838\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-s5px2" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.600027 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbnb8\" (UniqueName: \"kubernetes.io/projected/2db120e3-48a1-46c6-9d75-9e60012dcff4-kube-api-access-gbnb8\") pod \"cinder-operator-controller-manager-858d89fd-hf2f8\" (UID: \"2db120e3-48a1-46c6-9d75-9e60012dcff4\") " pod="openstack-operators/cinder-operator-controller-manager-858d89fd-hf2f8" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.600059 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmh9g\" (UniqueName: \"kubernetes.io/projected/dc96ab98-0882-4c4c-8011-642f5da0ce8d-kube-api-access-cmh9g\") pod \"glance-operator-controller-manager-f8c4db9df-76jqr\" (UID: \"dc96ab98-0882-4c4c-8011-642f5da0ce8d\") " pod="openstack-operators/glance-operator-controller-manager-f8c4db9df-76jqr" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.600097 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d2d32e1-adbe-4b24-bd98-0e51a52283f5-cert\") pod \"infra-operator-controller-manager-79955696d6-qzz5n\" (UID: \"5d2d32e1-adbe-4b24-bd98-0e51a52283f5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-qzz5n" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.600119 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52gz5\" (UniqueName: \"kubernetes.io/projected/0ae0f41a-2010-4578-a849-a47110a5cad7-kube-api-access-52gz5\") pod \"heat-operator-controller-manager-d8b84fbc-qrdfv\" (UID: \"0ae0f41a-2010-4578-a849-a47110a5cad7\") " pod="openstack-operators/heat-operator-controller-manager-d8b84fbc-qrdfv" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.600138 4757 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf7rd\" (UniqueName: \"kubernetes.io/projected/5d2d32e1-adbe-4b24-bd98-0e51a52283f5-kube-api-access-cf7rd\") pod \"infra-operator-controller-manager-79955696d6-qzz5n\" (UID: \"5d2d32e1-adbe-4b24-bd98-0e51a52283f5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-qzz5n" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.600176 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9xgx\" (UniqueName: \"kubernetes.io/projected/dc003609-336a-4cc2-a0fa-e3cd693a803d-kube-api-access-b9xgx\") pod \"designate-operator-controller-manager-dd77988f8-h7w6l\" (UID: \"dc003609-336a-4cc2-a0fa-e3cd693a803d\") " pod="openstack-operators/designate-operator-controller-manager-dd77988f8-h7w6l" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.600916 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-8ccc8547b-jh2fm"] Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.601672 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-8ccc8547b-jh2fm" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.606988 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-8pgxg" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.611335 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-8ccc8547b-jh2fm"] Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.620333 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-76c896469f-lflf2"] Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.621263 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-76c896469f-lflf2" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.626574 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-ld9bm" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.635884 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-76c896469f-lflf2"] Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.652249 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9xgx\" (UniqueName: \"kubernetes.io/projected/dc003609-336a-4cc2-a0fa-e3cd693a803d-kube-api-access-b9xgx\") pod \"designate-operator-controller-manager-dd77988f8-h7w6l\" (UID: \"dc003609-336a-4cc2-a0fa-e3cd693a803d\") " pod="openstack-operators/designate-operator-controller-manager-dd77988f8-h7w6l" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.652974 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52gz5\" (UniqueName: \"kubernetes.io/projected/0ae0f41a-2010-4578-a849-a47110a5cad7-kube-api-access-52gz5\") pod \"heat-operator-controller-manager-d8b84fbc-qrdfv\" (UID: \"0ae0f41a-2010-4578-a849-a47110a5cad7\") " pod="openstack-operators/heat-operator-controller-manager-d8b84fbc-qrdfv" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.658079 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbnb8\" (UniqueName: \"kubernetes.io/projected/2db120e3-48a1-46c6-9d75-9e60012dcff4-kube-api-access-gbnb8\") pod \"cinder-operator-controller-manager-858d89fd-hf2f8\" (UID: \"2db120e3-48a1-46c6-9d75-9e60012dcff4\") " pod="openstack-operators/cinder-operator-controller-manager-858d89fd-hf2f8" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.662352 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-2kd46"] Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.663351 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2kd46" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.668775 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-rqnrj" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.674243 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-858d89fd-hf2f8" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.677816 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr4zb\" (UniqueName: \"kubernetes.io/projected/629b88f8-504a-4e19-914a-7359c131deb2-kube-api-access-gr4zb\") pod \"barbican-operator-controller-manager-79f547bdd5-7bg8k\" (UID: \"629b88f8-504a-4e19-914a-7359c131deb2\") " pod="openstack-operators/barbican-operator-controller-manager-79f547bdd5-7bg8k" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.705857 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d2d32e1-adbe-4b24-bd98-0e51a52283f5-cert\") pod \"infra-operator-controller-manager-79955696d6-qzz5n\" (UID: \"5d2d32e1-adbe-4b24-bd98-0e51a52283f5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-qzz5n" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.705902 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf7rd\" (UniqueName: \"kubernetes.io/projected/5d2d32e1-adbe-4b24-bd98-0e51a52283f5-kube-api-access-cf7rd\") pod \"infra-operator-controller-manager-79955696d6-qzz5n\" (UID: \"5d2d32e1-adbe-4b24-bd98-0e51a52283f5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-qzz5n" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.705924 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54n8x\" (UniqueName: \"kubernetes.io/projected/eb034926-25ee-4735-a9c4-407c7cd152a4-kube-api-access-54n8x\") pod \"ironic-operator-controller-manager-866c9d5b98-tbvmq\" (UID: \"eb034926-25ee-4735-a9c4-407c7cd152a4\") " pod="openstack-operators/ironic-operator-controller-manager-866c9d5b98-tbvmq" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.705967 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m6rl\" (UniqueName: \"kubernetes.io/projected/edc8f287-a4c1-4558-b279-5159e135e838-kube-api-access-7m6rl\") pod \"horizon-operator-controller-manager-5fb775575f-s5px2\" (UID: \"edc8f287-a4c1-4558-b279-5159e135e838\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-s5px2" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.705986 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd5h7\" (UniqueName: \"kubernetes.io/projected/e9b2ed23-04f3-479f-870f-10f54f6ecab9-kube-api-access-kd5h7\") pod \"keystone-operator-controller-manager-8ccc8547b-jh2fm\" (UID: \"e9b2ed23-04f3-479f-870f-10f54f6ecab9\") " pod="openstack-operators/keystone-operator-controller-manager-8ccc8547b-jh2fm" Jan 29 15:30:29 crc kubenswrapper[4757]: E0129 15:30:29.706237 4757 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 15:30:29 crc kubenswrapper[4757]: E0129 15:30:29.706291 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d2d32e1-adbe-4b24-bd98-0e51a52283f5-cert podName:5d2d32e1-adbe-4b24-bd98-0e51a52283f5 nodeName:}" failed. No retries permitted until 2026-01-29 15:30:30.206276696 +0000 UTC m=+1193.495526933 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5d2d32e1-adbe-4b24-bd98-0e51a52283f5-cert") pod "infra-operator-controller-manager-79955696d6-qzz5n" (UID: "5d2d32e1-adbe-4b24-bd98-0e51a52283f5") : secret "infra-operator-webhook-server-cert" not found Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.707807 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmh9g\" (UniqueName: \"kubernetes.io/projected/dc96ab98-0882-4c4c-8011-642f5da0ce8d-kube-api-access-cmh9g\") pod \"glance-operator-controller-manager-f8c4db9df-76jqr\" (UID: \"dc96ab98-0882-4c4c-8011-642f5da0ce8d\") " pod="openstack-operators/glance-operator-controller-manager-f8c4db9df-76jqr" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.708168 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-dd77988f8-h7w6l" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.712649 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-f8c4db9df-76jqr" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.742621 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-gpkbd"] Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.743565 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-gpkbd" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.750325 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-2kd46"] Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.750903 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-d8b84fbc-qrdfv" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.758630 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-md4hg" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.764282 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf7rd\" (UniqueName: \"kubernetes.io/projected/5d2d32e1-adbe-4b24-bd98-0e51a52283f5-kube-api-access-cf7rd\") pod \"infra-operator-controller-manager-79955696d6-qzz5n\" (UID: \"5d2d32e1-adbe-4b24-bd98-0e51a52283f5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-qzz5n" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.789518 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m6rl\" (UniqueName: \"kubernetes.io/projected/edc8f287-a4c1-4558-b279-5159e135e838-kube-api-access-7m6rl\") pod \"horizon-operator-controller-manager-5fb775575f-s5px2\" (UID: \"edc8f287-a4c1-4558-b279-5159e135e838\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-s5px2" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.793913 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-gpkbd"] Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.807655 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54n8x\" (UniqueName: \"kubernetes.io/projected/eb034926-25ee-4735-a9c4-407c7cd152a4-kube-api-access-54n8x\") pod \"ironic-operator-controller-manager-866c9d5b98-tbvmq\" (UID: \"eb034926-25ee-4735-a9c4-407c7cd152a4\") " pod="openstack-operators/ironic-operator-controller-manager-866c9d5b98-tbvmq" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.807704 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd9wz\" (UniqueName: \"kubernetes.io/projected/d75a2490-77f1-41f0-b9c5-efcc7a2e520c-kube-api-access-dd9wz\") pod \"manila-operator-controller-manager-76c896469f-lflf2\" (UID: \"d75a2490-77f1-41f0-b9c5-efcc7a2e520c\") " pod="openstack-operators/manila-operator-controller-manager-76c896469f-lflf2" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.807754 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd5h7\" (UniqueName: \"kubernetes.io/projected/e9b2ed23-04f3-479f-870f-10f54f6ecab9-kube-api-access-kd5h7\") pod \"keystone-operator-controller-manager-8ccc8547b-jh2fm\" (UID: \"e9b2ed23-04f3-479f-870f-10f54f6ecab9\") " pod="openstack-operators/keystone-operator-controller-manager-8ccc8547b-jh2fm" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.807820 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnfxp\" (UniqueName: \"kubernetes.io/projected/635077c8-931b-4bda-b7dc-117279b97a5e-kube-api-access-lnfxp\") pod \"mariadb-operator-controller-manager-67bf948998-2kd46\" (UID: \"635077c8-931b-4bda-b7dc-117279b97a5e\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2kd46" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.817900 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-68cb478976-5rfk2"] Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.818830 4757 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-68cb478976-5rfk2" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.822932 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-r55fp" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.843648 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54n8x\" (UniqueName: \"kubernetes.io/projected/eb034926-25ee-4735-a9c4-407c7cd152a4-kube-api-access-54n8x\") pod \"ironic-operator-controller-manager-866c9d5b98-tbvmq\" (UID: \"eb034926-25ee-4735-a9c4-407c7cd152a4\") " pod="openstack-operators/ironic-operator-controller-manager-866c9d5b98-tbvmq" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.855922 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68f8cb846c-kng6x"] Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.856768 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-kng6x" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.858170 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd5h7\" (UniqueName: \"kubernetes.io/projected/e9b2ed23-04f3-479f-870f-10f54f6ecab9-kube-api-access-kd5h7\") pod \"keystone-operator-controller-manager-8ccc8547b-jh2fm\" (UID: \"e9b2ed23-04f3-479f-870f-10f54f6ecab9\") " pod="openstack-operators/keystone-operator-controller-manager-8ccc8547b-jh2fm" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.863758 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-dkhwc" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.872936 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-866c9d5b98-tbvmq" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.909324 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd9wz\" (UniqueName: \"kubernetes.io/projected/d75a2490-77f1-41f0-b9c5-efcc7a2e520c-kube-api-access-dd9wz\") pod \"manila-operator-controller-manager-76c896469f-lflf2\" (UID: \"d75a2490-77f1-41f0-b9c5-efcc7a2e520c\") " pod="openstack-operators/manila-operator-controller-manager-76c896469f-lflf2" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.909437 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnfxp\" (UniqueName: \"kubernetes.io/projected/635077c8-931b-4bda-b7dc-117279b97a5e-kube-api-access-lnfxp\") pod \"mariadb-operator-controller-manager-67bf948998-2kd46\" (UID: \"635077c8-931b-4bda-b7dc-117279b97a5e\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2kd46" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.909480 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwwk6\" (UniqueName: \"kubernetes.io/projected/0180bde3-8b8c-4ffe-a5d2-cc39199feb28-kube-api-access-mwwk6\") pod \"neutron-operator-controller-manager-7c7cc6ff45-gpkbd\" (UID: \"0180bde3-8b8c-4ffe-a5d2-cc39199feb28\") " pod="openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-gpkbd" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.928477 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-8ccc8547b-jh2fm" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.954316 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnfxp\" (UniqueName: \"kubernetes.io/projected/635077c8-931b-4bda-b7dc-117279b97a5e-kube-api-access-lnfxp\") pod \"mariadb-operator-controller-manager-67bf948998-2kd46\" (UID: \"635077c8-931b-4bda-b7dc-117279b97a5e\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2kd46" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.958594 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd9wz\" (UniqueName: \"kubernetes.io/projected/d75a2490-77f1-41f0-b9c5-efcc7a2e520c-kube-api-access-dd9wz\") pod \"manila-operator-controller-manager-76c896469f-lflf2\" (UID: \"d75a2490-77f1-41f0-b9c5-efcc7a2e520c\") " pod="openstack-operators/manila-operator-controller-manager-76c896469f-lflf2" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.964737 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-79f547bdd5-7bg8k" Jan 29 15:30:29 crc kubenswrapper[4757]: I0129 15:30:29.965399 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-68cb478976-5rfk2"] Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.005241 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68f8cb846c-kng6x"] Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.012726 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwwk6\" (UniqueName: \"kubernetes.io/projected/0180bde3-8b8c-4ffe-a5d2-cc39199feb28-kube-api-access-mwwk6\") pod \"neutron-operator-controller-manager-7c7cc6ff45-gpkbd\" (UID: \"0180bde3-8b8c-4ffe-a5d2-cc39199feb28\") " pod="openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-gpkbd" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.012808 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9688\" (UniqueName: \"kubernetes.io/projected/5590a40a-b378-4912-881d-68b46fb6564d-kube-api-access-w9688\") pod \"octavia-operator-controller-manager-68f8cb846c-kng6x\" (UID: \"5590a40a-b378-4912-881d-68b46fb6564d\") " pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-kng6x" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.012886 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w69fj\" (UniqueName: \"kubernetes.io/projected/a5549d49-38a8-4441-8200-6381ddf682b6-kube-api-access-w69fj\") pod \"nova-operator-controller-manager-68cb478976-5rfk2\" (UID: \"a5549d49-38a8-4441-8200-6381ddf682b6\") " pod="openstack-operators/nova-operator-controller-manager-68cb478976-5rfk2" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.018730 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj"] Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.019703 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.023549 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.023593 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-z778s" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.066099 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-s5px2" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.068950 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-82wnl"] Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.075619 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2kd46" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.077934 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-82wnl" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.080564 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwwk6\" (UniqueName: \"kubernetes.io/projected/0180bde3-8b8c-4ffe-a5d2-cc39199feb28-kube-api-access-mwwk6\") pod \"neutron-operator-controller-manager-7c7cc6ff45-gpkbd\" (UID: \"0180bde3-8b8c-4ffe-a5d2-cc39199feb28\") " pod="openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-gpkbd" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.080600 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-sr64z" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.113993 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w69fj\" (UniqueName: \"kubernetes.io/projected/a5549d49-38a8-4441-8200-6381ddf682b6-kube-api-access-w69fj\") pod \"nova-operator-controller-manager-68cb478976-5rfk2\" (UID: \"a5549d49-38a8-4441-8200-6381ddf682b6\") " pod="openstack-operators/nova-operator-controller-manager-68cb478976-5rfk2" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.114050 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkxzm\" (UniqueName: \"kubernetes.io/projected/5297dfef-4739-4076-99f2-462bf83c4b4b-kube-api-access-lkxzm\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj\" (UID: \"5297dfef-4739-4076-99f2-462bf83c4b4b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.114139 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5297dfef-4739-4076-99f2-462bf83c4b4b-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj\" (UID: \"5297dfef-4739-4076-99f2-462bf83c4b4b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.114195 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9688\" (UniqueName: \"kubernetes.io/projected/5590a40a-b378-4912-881d-68b46fb6564d-kube-api-access-w9688\") pod \"octavia-operator-controller-manager-68f8cb846c-kng6x\" (UID: \"5590a40a-b378-4912-881d-68b46fb6564d\") " pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-kng6x" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.116936 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-gpkbd" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.117320 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-82wnl"] Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.141848 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj"] Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.149070 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w69fj\" (UniqueName: \"kubernetes.io/projected/a5549d49-38a8-4441-8200-6381ddf682b6-kube-api-access-w69fj\") pod \"nova-operator-controller-manager-68cb478976-5rfk2\" (UID: \"a5549d49-38a8-4441-8200-6381ddf682b6\") " pod="openstack-operators/nova-operator-controller-manager-68cb478976-5rfk2" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.164081 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-68cb478976-5rfk2" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.170792 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9688\" (UniqueName: \"kubernetes.io/projected/5590a40a-b378-4912-881d-68b46fb6564d-kube-api-access-w9688\") pod \"octavia-operator-controller-manager-68f8cb846c-kng6x\" (UID: \"5590a40a-b378-4912-881d-68b46fb6564d\") " pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-kng6x" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.188822 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-2zgqs"] Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.189713 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2zgqs" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.202866 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-xwr7c" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.203057 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6f7455757b-zfvjn"] Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.204027 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-zfvjn" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.206990 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-grncr"] Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.207197 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-nmpr6" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.210723 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-grncr" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.213619 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-kng6x" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.216169 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-2zgqs"] Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.216625 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5297dfef-4739-4076-99f2-462bf83c4b4b-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj\" (UID: \"5297dfef-4739-4076-99f2-462bf83c4b4b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.216653 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d2d32e1-adbe-4b24-bd98-0e51a52283f5-cert\") pod \"infra-operator-controller-manager-79955696d6-qzz5n\" (UID: \"5d2d32e1-adbe-4b24-bd98-0e51a52283f5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-qzz5n" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.216677 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5xnl\" (UniqueName: \"kubernetes.io/projected/c7d33f5e-ce62-40e5-9400-c28c1cb50753-kube-api-access-l5xnl\") pod \"ovn-operator-controller-manager-788c46999f-82wnl\" (UID: \"c7d33f5e-ce62-40e5-9400-c28c1cb50753\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-82wnl" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.216726 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkxzm\" (UniqueName: \"kubernetes.io/projected/5297dfef-4739-4076-99f2-462bf83c4b4b-kube-api-access-lkxzm\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj\" (UID: \"5297dfef-4739-4076-99f2-462bf83c4b4b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj" Jan 29 15:30:30 crc kubenswrapper[4757]: E0129 15:30:30.217041 4757 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:30:30 crc kubenswrapper[4757]: E0129 15:30:30.217075 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5297dfef-4739-4076-99f2-462bf83c4b4b-cert podName:5297dfef-4739-4076-99f2-462bf83c4b4b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:30.717063493 +0000 UTC m=+1194.006313730 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5297dfef-4739-4076-99f2-462bf83c4b4b-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj" (UID: "5297dfef-4739-4076-99f2-462bf83c4b4b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:30:30 crc kubenswrapper[4757]: E0129 15:30:30.217210 4757 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 15:30:30 crc kubenswrapper[4757]: E0129 15:30:30.217230 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d2d32e1-adbe-4b24-bd98-0e51a52283f5-cert podName:5d2d32e1-adbe-4b24-bd98-0e51a52283f5 nodeName:}" failed. 
No retries permitted until 2026-01-29 15:30:31.217222978 +0000 UTC m=+1194.506473215 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5d2d32e1-adbe-4b24-bd98-0e51a52283f5-cert") pod "infra-operator-controller-manager-79955696d6-qzz5n" (UID: "5d2d32e1-adbe-4b24-bd98-0e51a52283f5") : secret "infra-operator-webhook-server-cert" not found Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.225012 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-k87cw" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.242493 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-76c896469f-lflf2" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.270598 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6f7455757b-zfvjn"] Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.280140 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkxzm\" (UniqueName: \"kubernetes.io/projected/5297dfef-4739-4076-99f2-462bf83c4b4b-kube-api-access-lkxzm\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj\" (UID: \"5297dfef-4739-4076-99f2-462bf83c4b4b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.280185 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-grncr"] Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.319367 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljjhs\" (UniqueName: \"kubernetes.io/projected/a921cf1b-0823-487b-9b4f-eb7eefca9cb5-kube-api-access-ljjhs\") pod \"swift-operator-controller-manager-6f7455757b-zfvjn\" (UID: \"a921cf1b-0823-487b-9b4f-eb7eefca9cb5\") " pod="openstack-operators/swift-operator-controller-manager-6f7455757b-zfvjn" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.319417 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb8hq\" (UniqueName: \"kubernetes.io/projected/1373c007-6220-40ca-a9a7-176d6779ff9e-kube-api-access-sb8hq\") pod \"telemetry-operator-controller-manager-6cf8c44c7-grncr\" (UID: \"1373c007-6220-40ca-a9a7-176d6779ff9e\") " pod="openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-grncr" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.319465 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nmtq\" (UniqueName: \"kubernetes.io/projected/4ab1a5d0-6fc4-4081-85d6-047635db038e-kube-api-access-2nmtq\") pod \"placement-operator-controller-manager-5b964cf4cd-2zgqs\" (UID: \"4ab1a5d0-6fc4-4081-85d6-047635db038e\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2zgqs" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.319523 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5xnl\" (UniqueName: \"kubernetes.io/projected/c7d33f5e-ce62-40e5-9400-c28c1cb50753-kube-api-access-l5xnl\") pod \"ovn-operator-controller-manager-788c46999f-82wnl\" (UID: \"c7d33f5e-ce62-40e5-9400-c28c1cb50753\") " 
pod="openstack-operators/ovn-operator-controller-manager-788c46999f-82wnl" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.319992 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-dmc9f"] Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.320999 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dmc9f" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.324662 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-v8rrv" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.335669 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-dmc9f"] Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.355890 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5xnl\" (UniqueName: \"kubernetes.io/projected/c7d33f5e-ce62-40e5-9400-c28c1cb50753-kube-api-access-l5xnl\") pod \"ovn-operator-controller-manager-788c46999f-82wnl\" (UID: \"c7d33f5e-ce62-40e5-9400-c28c1cb50753\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-82wnl" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.411750 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-6z2bh"] Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.412766 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-6z2bh" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.417700 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-t8vzl" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.421914 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7w4n\" (UniqueName: \"kubernetes.io/projected/0971e983-bccd-421c-8171-212672e8b8b7-kube-api-access-b7w4n\") pod \"test-operator-controller-manager-56f8bfcd9f-dmc9f\" (UID: \"0971e983-bccd-421c-8171-212672e8b8b7\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dmc9f" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.421979 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljjhs\" (UniqueName: \"kubernetes.io/projected/a921cf1b-0823-487b-9b4f-eb7eefca9cb5-kube-api-access-ljjhs\") pod \"swift-operator-controller-manager-6f7455757b-zfvjn\" (UID: \"a921cf1b-0823-487b-9b4f-eb7eefca9cb5\") " pod="openstack-operators/swift-operator-controller-manager-6f7455757b-zfvjn" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.422004 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb8hq\" (UniqueName: \"kubernetes.io/projected/1373c007-6220-40ca-a9a7-176d6779ff9e-kube-api-access-sb8hq\") pod \"telemetry-operator-controller-manager-6cf8c44c7-grncr\" (UID: \"1373c007-6220-40ca-a9a7-176d6779ff9e\") " pod="openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-grncr" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.422049 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nmtq\" (UniqueName: 
\"kubernetes.io/projected/4ab1a5d0-6fc4-4081-85d6-047635db038e-kube-api-access-2nmtq\") pod \"placement-operator-controller-manager-5b964cf4cd-2zgqs\" (UID: \"4ab1a5d0-6fc4-4081-85d6-047635db038e\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2zgqs" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.437294 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-82wnl" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.437756 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-6z2bh"] Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.479950 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb8hq\" (UniqueName: \"kubernetes.io/projected/1373c007-6220-40ca-a9a7-176d6779ff9e-kube-api-access-sb8hq\") pod \"telemetry-operator-controller-manager-6cf8c44c7-grncr\" (UID: \"1373c007-6220-40ca-a9a7-176d6779ff9e\") " pod="openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-grncr" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.481505 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nmtq\" (UniqueName: \"kubernetes.io/projected/4ab1a5d0-6fc4-4081-85d6-047635db038e-kube-api-access-2nmtq\") pod \"placement-operator-controller-manager-5b964cf4cd-2zgqs\" (UID: \"4ab1a5d0-6fc4-4081-85d6-047635db038e\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2zgqs" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.485661 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljjhs\" (UniqueName: \"kubernetes.io/projected/a921cf1b-0823-487b-9b4f-eb7eefca9cb5-kube-api-access-ljjhs\") pod \"swift-operator-controller-manager-6f7455757b-zfvjn\" (UID: \"a921cf1b-0823-487b-9b4f-eb7eefca9cb5\") " pod="openstack-operators/swift-operator-controller-manager-6f7455757b-zfvjn" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.488576 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc"] Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.489871 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.499816 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.500001 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-shpll" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.500129 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.501610 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc"] Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.523386 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7w4n\" (UniqueName: \"kubernetes.io/projected/0971e983-bccd-421c-8171-212672e8b8b7-kube-api-access-b7w4n\") pod \"test-operator-controller-manager-56f8bfcd9f-dmc9f\" (UID: \"0971e983-bccd-421c-8171-212672e8b8b7\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dmc9f" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.523433 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x2gt\" (UniqueName: \"kubernetes.io/projected/fd851f0e-29f7-44b9-8c6e-f3b66a90c6b6-kube-api-access-7x2gt\") pod \"watcher-operator-controller-manager-59f4c7d7c4-6z2bh\" (UID: \"fd851f0e-29f7-44b9-8c6e-f3b66a90c6b6\") " pod="openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-6z2bh" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.529492 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdwks"] Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.530419 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdwks" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.543010 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-zfvjn" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.543348 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-m2chf" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.549292 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2zgqs" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.550109 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-grncr" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.571901 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7w4n\" (UniqueName: \"kubernetes.io/projected/0971e983-bccd-421c-8171-212672e8b8b7-kube-api-access-b7w4n\") pod \"test-operator-controller-manager-56f8bfcd9f-dmc9f\" (UID: \"0971e983-bccd-421c-8171-212672e8b8b7\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dmc9f" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.571970 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdwks"] Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.625793 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fljvp\" (UniqueName: \"kubernetes.io/projected/2c9cefc6-204f-42c8-b7a6-2c2776617a58-kube-api-access-fljvp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fdwks\" (UID: \"2c9cefc6-204f-42c8-b7a6-2c2776617a58\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdwks" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.625851 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-metrics-certs\") pod \"openstack-operator-controller-manager-5cbc58956b-jn7tc\" (UID: \"e25703a2-f64f-43ff-b95f-3c9640fd9029\") " pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.625933 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-webhook-certs\") pod \"openstack-operator-controller-manager-5cbc58956b-jn7tc\" (UID: \"e25703a2-f64f-43ff-b95f-3c9640fd9029\") " pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.626001 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr52f\" (UniqueName: \"kubernetes.io/projected/e25703a2-f64f-43ff-b95f-3c9640fd9029-kube-api-access-xr52f\") pod \"openstack-operator-controller-manager-5cbc58956b-jn7tc\" (UID: \"e25703a2-f64f-43ff-b95f-3c9640fd9029\") " pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.626031 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x2gt\" (UniqueName: \"kubernetes.io/projected/fd851f0e-29f7-44b9-8c6e-f3b66a90c6b6-kube-api-access-7x2gt\") pod \"watcher-operator-controller-manager-59f4c7d7c4-6z2bh\" (UID: \"fd851f0e-29f7-44b9-8c6e-f3b66a90c6b6\") " pod="openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-6z2bh" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.644514 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dmc9f" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.674685 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x2gt\" (UniqueName: \"kubernetes.io/projected/fd851f0e-29f7-44b9-8c6e-f3b66a90c6b6-kube-api-access-7x2gt\") pod \"watcher-operator-controller-manager-59f4c7d7c4-6z2bh\" (UID: \"fd851f0e-29f7-44b9-8c6e-f3b66a90c6b6\") " pod="openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-6z2bh" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.727679 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fljvp\" (UniqueName: \"kubernetes.io/projected/2c9cefc6-204f-42c8-b7a6-2c2776617a58-kube-api-access-fljvp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fdwks\" (UID: \"2c9cefc6-204f-42c8-b7a6-2c2776617a58\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdwks" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.727728 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-metrics-certs\") pod \"openstack-operator-controller-manager-5cbc58956b-jn7tc\" (UID: \"e25703a2-f64f-43ff-b95f-3c9640fd9029\") " pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.727759 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5297dfef-4739-4076-99f2-462bf83c4b4b-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj\" (UID: \"5297dfef-4739-4076-99f2-462bf83c4b4b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.727798 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-webhook-certs\") pod \"openstack-operator-controller-manager-5cbc58956b-jn7tc\" (UID: \"e25703a2-f64f-43ff-b95f-3c9640fd9029\") " pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.727841 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr52f\" (UniqueName: \"kubernetes.io/projected/e25703a2-f64f-43ff-b95f-3c9640fd9029-kube-api-access-xr52f\") pod \"openstack-operator-controller-manager-5cbc58956b-jn7tc\" (UID: \"e25703a2-f64f-43ff-b95f-3c9640fd9029\") " pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" Jan 29 15:30:30 crc kubenswrapper[4757]: E0129 15:30:30.728294 4757 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:30:30 crc kubenswrapper[4757]: E0129 15:30:30.728360 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5297dfef-4739-4076-99f2-462bf83c4b4b-cert podName:5297dfef-4739-4076-99f2-462bf83c4b4b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:31.728343885 +0000 UTC m=+1195.017594122 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5297dfef-4739-4076-99f2-462bf83c4b4b-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj" (UID: "5297dfef-4739-4076-99f2-462bf83c4b4b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:30:30 crc kubenswrapper[4757]: E0129 15:30:30.728539 4757 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 15:30:30 crc kubenswrapper[4757]: E0129 15:30:30.728597 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-webhook-certs podName:e25703a2-f64f-43ff-b95f-3c9640fd9029 nodeName:}" failed. No retries permitted until 2026-01-29 15:30:31.228566761 +0000 UTC m=+1194.517816998 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-webhook-certs") pod "openstack-operator-controller-manager-5cbc58956b-jn7tc" (UID: "e25703a2-f64f-43ff-b95f-3c9640fd9029") : secret "webhook-server-cert" not found Jan 29 15:30:30 crc kubenswrapper[4757]: E0129 15:30:30.728638 4757 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 15:30:30 crc kubenswrapper[4757]: E0129 15:30:30.728662 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-metrics-certs podName:e25703a2-f64f-43ff-b95f-3c9640fd9029 nodeName:}" failed. No retries permitted until 2026-01-29 15:30:31.228655314 +0000 UTC m=+1194.517905551 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-metrics-certs") pod "openstack-operator-controller-manager-5cbc58956b-jn7tc" (UID: "e25703a2-f64f-43ff-b95f-3c9640fd9029") : secret "metrics-server-cert" not found Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.759315 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fljvp\" (UniqueName: \"kubernetes.io/projected/2c9cefc6-204f-42c8-b7a6-2c2776617a58-kube-api-access-fljvp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fdwks\" (UID: \"2c9cefc6-204f-42c8-b7a6-2c2776617a58\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdwks" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.768618 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-6z2bh" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.779964 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr52f\" (UniqueName: \"kubernetes.io/projected/e25703a2-f64f-43ff-b95f-3c9640fd9029-kube-api-access-xr52f\") pod \"openstack-operator-controller-manager-5cbc58956b-jn7tc\" (UID: \"e25703a2-f64f-43ff-b95f-3c9640fd9029\") " pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.811165 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-858d89fd-hf2f8"] Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.862675 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdwks" Jan 29 15:30:30 crc kubenswrapper[4757]: I0129 15:30:30.870835 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-858d89fd-hf2f8" event={"ID":"2db120e3-48a1-46c6-9d75-9e60012dcff4","Type":"ContainerStarted","Data":"c0b01047fe74435c049e495a1c221b43023736a348fe14c4b3613eb2633ba6f1"} Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.179900 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-dd77988f8-h7w6l"] Jan 29 15:30:31 crc kubenswrapper[4757]: W0129 15:30:31.203474 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc003609_336a_4cc2_a0fa_e3cd693a803d.slice/crio-75a4ec8b06f56ce484fc50dc13bc7695782795e148d4a84b0e9eb17c54a38ab5 WatchSource:0}: Error finding container 75a4ec8b06f56ce484fc50dc13bc7695782795e148d4a84b0e9eb17c54a38ab5: Status 404 returned error can't find the container with id 75a4ec8b06f56ce484fc50dc13bc7695782795e148d4a84b0e9eb17c54a38ab5 Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.240988 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-metrics-certs\") pod \"openstack-operator-controller-manager-5cbc58956b-jn7tc\" (UID: \"e25703a2-f64f-43ff-b95f-3c9640fd9029\") " pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.241049 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d2d32e1-adbe-4b24-bd98-0e51a52283f5-cert\") pod \"infra-operator-controller-manager-79955696d6-qzz5n\" (UID: \"5d2d32e1-adbe-4b24-bd98-0e51a52283f5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-qzz5n" Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.241069 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-webhook-certs\") pod \"openstack-operator-controller-manager-5cbc58956b-jn7tc\" (UID: \"e25703a2-f64f-43ff-b95f-3c9640fd9029\") " pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" Jan 29 15:30:31 crc kubenswrapper[4757]: E0129 15:30:31.241316 4757 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 15:30:31 crc kubenswrapper[4757]: E0129 15:30:31.241364 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-webhook-certs podName:e25703a2-f64f-43ff-b95f-3c9640fd9029 nodeName:}" failed. No retries permitted until 2026-01-29 15:30:32.241351187 +0000 UTC m=+1195.530601424 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-webhook-certs") pod "openstack-operator-controller-manager-5cbc58956b-jn7tc" (UID: "e25703a2-f64f-43ff-b95f-3c9640fd9029") : secret "webhook-server-cert" not found Jan 29 15:30:31 crc kubenswrapper[4757]: E0129 15:30:31.241675 4757 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 15:30:31 crc kubenswrapper[4757]: E0129 15:30:31.241697 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-metrics-certs podName:e25703a2-f64f-43ff-b95f-3c9640fd9029 nodeName:}" failed. No retries permitted until 2026-01-29 15:30:32.241689807 +0000 UTC m=+1195.530940044 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-metrics-certs") pod "openstack-operator-controller-manager-5cbc58956b-jn7tc" (UID: "e25703a2-f64f-43ff-b95f-3c9640fd9029") : secret "metrics-server-cert" not found Jan 29 15:30:31 crc kubenswrapper[4757]: E0129 15:30:31.241738 4757 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 15:30:31 crc kubenswrapper[4757]: E0129 15:30:31.241762 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d2d32e1-adbe-4b24-bd98-0e51a52283f5-cert podName:5d2d32e1-adbe-4b24-bd98-0e51a52283f5 nodeName:}" failed. No retries permitted until 2026-01-29 15:30:33.241754459 +0000 UTC m=+1196.531004696 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5d2d32e1-adbe-4b24-bd98-0e51a52283f5-cert") pod "infra-operator-controller-manager-79955696d6-qzz5n" (UID: "5d2d32e1-adbe-4b24-bd98-0e51a52283f5") : secret "infra-operator-webhook-server-cert" not found Jan 29 15:30:31 crc kubenswrapper[4757]: W0129 15:30:31.388084 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod635077c8_931b_4bda_b7dc_117279b97a5e.slice/crio-9e28c0b6d4343421089b719e9e9661878b71b665ef77129d6c67c2032cdccc9b WatchSource:0}: Error finding container 9e28c0b6d4343421089b719e9e9661878b71b665ef77129d6c67c2032cdccc9b: Status 404 returned error can't find the container with id 9e28c0b6d4343421089b719e9e9661878b71b665ef77129d6c67c2032cdccc9b Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.389725 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-2kd46"] Jan 29 15:30:31 crc kubenswrapper[4757]: W0129 15:30:31.395724 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ae0f41a_2010_4578_a849_a47110a5cad7.slice/crio-e6b751a7fb1fb57f739e6768b34c0d26961e9525c2accd2c01762143338cfd77 WatchSource:0}: Error finding container e6b751a7fb1fb57f739e6768b34c0d26961e9525c2accd2c01762143338cfd77: Status 404 returned error can't find the container with id e6b751a7fb1fb57f739e6768b34c0d26961e9525c2accd2c01762143338cfd77 Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.406699 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-d8b84fbc-qrdfv"] Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.419474 4757 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-8ccc8547b-jh2fm"] Jan 29 15:30:31 crc kubenswrapper[4757]: W0129 15:30:31.451794 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9b2ed23_04f3_479f_870f_10f54f6ecab9.slice/crio-e5d19f95f094f954edfd1e009243c3941d6de85cd1eb529514441b599ff7f6ef WatchSource:0}: Error finding container e5d19f95f094f954edfd1e009243c3941d6de85cd1eb529514441b599ff7f6ef: Status 404 returned error can't find the container with id e5d19f95f094f954edfd1e009243c3941d6de85cd1eb529514441b599ff7f6ef Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.558492 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-76c896469f-lflf2"] Jan 29 15:30:31 crc kubenswrapper[4757]: W0129 15:30:31.561298 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedc8f287_a4c1_4558_b279_5159e135e838.slice/crio-0c044342537d0f6ef8bef3f88600d5a96dca0817782ef9b6fc0fe3a595cdb524 WatchSource:0}: Error finding container 0c044342537d0f6ef8bef3f88600d5a96dca0817782ef9b6fc0fe3a595cdb524: Status 404 returned error can't find the container with id 0c044342537d0f6ef8bef3f88600d5a96dca0817782ef9b6fc0fe3a595cdb524 Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.569673 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-s5px2"] Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.755444 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5297dfef-4739-4076-99f2-462bf83c4b4b-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj\" (UID: \"5297dfef-4739-4076-99f2-462bf83c4b4b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj" Jan 29 15:30:31 crc kubenswrapper[4757]: E0129 15:30:31.756015 4757 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:30:31 crc kubenswrapper[4757]: E0129 15:30:31.756100 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5297dfef-4739-4076-99f2-462bf83c4b4b-cert podName:5297dfef-4739-4076-99f2-462bf83c4b4b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:33.756082119 +0000 UTC m=+1197.045332356 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5297dfef-4739-4076-99f2-462bf83c4b4b-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj" (UID: "5297dfef-4739-4076-99f2-462bf83c4b4b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.788707 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-gpkbd"] Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.825324 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6f7455757b-zfvjn"] Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.840758 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-2zgqs"] Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.857028 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-79f547bdd5-7bg8k"] Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.867477 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-dmc9f"] Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.877215 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-6z2bh"] Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.887101 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-f8c4db9df-76jqr"] Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.899346 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-grncr"] Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.899408 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-866c9d5b98-tbvmq"] Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.912592 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdwks"] Jan 29 15:30:31 crc kubenswrapper[4757]: W0129 15:30:31.918107 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5549d49_38a8_4441_8200_6381ddf682b6.slice/crio-548ea1a9a9645fd7b6d64f9b416ee5c4c3e65ae34a22987c0c41546328358b8d WatchSource:0}: Error finding container 548ea1a9a9645fd7b6d64f9b416ee5c4c3e65ae34a22987c0c41546328358b8d: Status 404 returned error can't find the container with id 548ea1a9a9645fd7b6d64f9b416ee5c4c3e65ae34a22987c0c41546328358b8d Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.921336 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-82wnl"] Jan 29 15:30:31 crc kubenswrapper[4757]: W0129 15:30:31.923196 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0971e983_bccd_421c_8171_212672e8b8b7.slice/crio-fd6d00a7677c14b5fe02545a0d7ad843c0a075c4e4b6d09b767cf581324c88ff WatchSource:0}: Error finding container fd6d00a7677c14b5fe02545a0d7ad843c0a075c4e4b6d09b767cf581324c88ff: Status 404 returned error can't find the container with id 
fd6d00a7677c14b5fe02545a0d7ad843c0a075c4e4b6d09b767cf581324c88ff Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.923334 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-dd77988f8-h7w6l" event={"ID":"dc003609-336a-4cc2-a0fa-e3cd693a803d","Type":"ContainerStarted","Data":"75a4ec8b06f56ce484fc50dc13bc7695782795e148d4a84b0e9eb17c54a38ab5"} Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.928878 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68f8cb846c-kng6x"] Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.929637 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-s5px2" event={"ID":"edc8f287-a4c1-4558-b279-5159e135e838","Type":"ContainerStarted","Data":"0c044342537d0f6ef8bef3f88600d5a96dca0817782ef9b6fc0fe3a595cdb524"} Jan 29 15:30:31 crc kubenswrapper[4757]: E0129 15:30:31.932923 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b7w4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-dmc9f_openstack-operators(0971e983-bccd-421c-8171-212672e8b8b7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 
15:30:31.933244 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-d8b84fbc-qrdfv" event={"ID":"0ae0f41a-2010-4578-a849-a47110a5cad7","Type":"ContainerStarted","Data":"e6b751a7fb1fb57f739e6768b34c0d26961e9525c2accd2c01762143338cfd77"} Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.933725 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-68cb478976-5rfk2"] Jan 29 15:30:31 crc kubenswrapper[4757]: E0129 15:30:31.934021 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dmc9f" podUID="0971e983-bccd-421c-8171-212672e8b8b7" Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.934596 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2kd46" event={"ID":"635077c8-931b-4bda-b7dc-117279b97a5e","Type":"ContainerStarted","Data":"9e28c0b6d4343421089b719e9e9661878b71b665ef77129d6c67c2032cdccc9b"} Jan 29 15:30:31 crc kubenswrapper[4757]: E0129 15:30:31.936509 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/glance-operator@sha256:ebb3f9f6e871da3fdfdefdf4040964abcdc5f4c7dac961a27c85a80f37866f00,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cmh9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
glance-operator-controller-manager-f8c4db9df-76jqr_openstack-operators(dc96ab98-0882-4c4c-8011-642f5da0ce8d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.936595 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-79f547bdd5-7bg8k" event={"ID":"629b88f8-504a-4e19-914a-7359c131deb2","Type":"ContainerStarted","Data":"25fd6b9fe7c1add91b760b8f7df27945f1b8e788926afcd5e224c7d4e8b7def2"} Jan 29 15:30:31 crc kubenswrapper[4757]: E0129 15:30:31.937713 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/glance-operator-controller-manager-f8c4db9df-76jqr" podUID="dc96ab98-0882-4c4c-8011-642f5da0ce8d" Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.940558 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-gpkbd" event={"ID":"0180bde3-8b8c-4ffe-a5d2-cc39199feb28","Type":"ContainerStarted","Data":"5fcfceec6253bdfb34edf069f71362408d56f340e5b33f7ced4bc0003a5d4407"} Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.942948 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-grncr" event={"ID":"1373c007-6220-40ca-a9a7-176d6779ff9e","Type":"ContainerStarted","Data":"9fe7a7bc0ff0d29f7359e1ebf69f2fe622220758b3810602935b9addab9a3a37"} Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.944465 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-8ccc8547b-jh2fm" event={"ID":"e9b2ed23-04f3-479f-870f-10f54f6ecab9","Type":"ContainerStarted","Data":"e5d19f95f094f954edfd1e009243c3941d6de85cd1eb529514441b599ff7f6ef"} Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.945548 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-zfvjn" event={"ID":"a921cf1b-0823-487b-9b4f-eb7eefca9cb5","Type":"ContainerStarted","Data":"f8d86ac06d4b0d00ac2597efc10b85322c2a33829ad464e08e2016a01550eb2d"} Jan 29 15:30:31 crc kubenswrapper[4757]: I0129 15:30:31.947083 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-76c896469f-lflf2" event={"ID":"d75a2490-77f1-41f0-b9c5-efcc7a2e520c","Type":"ContainerStarted","Data":"7c1f790e631935c1a5381b89dc78b8357feb101ecd134a3c6af4d11931bbcbe6"} Jan 29 15:30:31 crc kubenswrapper[4757]: E0129 15:30:31.989299 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/ironic-operator@sha256:d5166d67cfb571a8b84635a479d0fada7a1f0698ebf1549b7e55e6689e4ecb48,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-54n8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-866c9d5b98-tbvmq_openstack-operators(eb034926-25ee-4735-a9c4-407c7cd152a4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 15:30:31 crc kubenswrapper[4757]: E0129 15:30:31.992471 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ironic-operator-controller-manager-866c9d5b98-tbvmq" podUID="eb034926-25ee-4735-a9c4-407c7cd152a4" Jan 29 15:30:31 crc kubenswrapper[4757]: W0129 15:30:31.994977 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5590a40a_b378_4912_881d_68b46fb6564d.slice/crio-d0c9f99e25c872804ba8b35615673b864795809a76cdf80883c9d8806b08cefc WatchSource:0}: Error finding container d0c9f99e25c872804ba8b35615673b864795809a76cdf80883c9d8806b08cefc: Status 404 returned error can't find the container with id d0c9f99e25c872804ba8b35615673b864795809a76cdf80883c9d8806b08cefc Jan 29 15:30:31 crc kubenswrapper[4757]: E0129 15:30:31.996826 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi 
BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fljvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-fdwks_openstack-operators(2c9cefc6-204f-42c8-b7a6-2c2776617a58): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 15:30:31 crc kubenswrapper[4757]: E0129 15:30:31.997685 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l5xnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovn-operator-controller-manager-788c46999f-82wnl_openstack-operators(c7d33f5e-ce62-40e5-9400-c28c1cb50753): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 15:30:31 crc kubenswrapper[4757]: E0129 15:30:31.997959 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdwks" podUID="2c9cefc6-204f-42c8-b7a6-2c2776617a58" Jan 29 15:30:31 crc kubenswrapper[4757]: E0129 15:30:31.999072 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-82wnl" podUID="c7d33f5e-ce62-40e5-9400-c28c1cb50753" Jan 29 15:30:32 crc kubenswrapper[4757]: E0129 15:30:32.002642 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:61e700ea66730db00f31cb2a89fcd49bb919f246027c414e509166c1cab8429c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w9688,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-68f8cb846c-kng6x_openstack-operators(5590a40a-b378-4912-881d-68b46fb6564d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 15:30:32 crc kubenswrapper[4757]: E0129 15:30:32.003744 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: 
\"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-kng6x" podUID="5590a40a-b378-4912-881d-68b46fb6564d" Jan 29 15:30:32 crc kubenswrapper[4757]: I0129 15:30:32.263984 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-metrics-certs\") pod \"openstack-operator-controller-manager-5cbc58956b-jn7tc\" (UID: \"e25703a2-f64f-43ff-b95f-3c9640fd9029\") " pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" Jan 29 15:30:32 crc kubenswrapper[4757]: I0129 15:30:32.264116 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-webhook-certs\") pod \"openstack-operator-controller-manager-5cbc58956b-jn7tc\" (UID: \"e25703a2-f64f-43ff-b95f-3c9640fd9029\") " pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" Jan 29 15:30:32 crc kubenswrapper[4757]: E0129 15:30:32.264323 4757 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 15:30:32 crc kubenswrapper[4757]: E0129 15:30:32.264385 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-webhook-certs podName:e25703a2-f64f-43ff-b95f-3c9640fd9029 nodeName:}" failed. No retries permitted until 2026-01-29 15:30:34.264365423 +0000 UTC m=+1197.553615670 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-webhook-certs") pod "openstack-operator-controller-manager-5cbc58956b-jn7tc" (UID: "e25703a2-f64f-43ff-b95f-3c9640fd9029") : secret "webhook-server-cert" not found Jan 29 15:30:32 crc kubenswrapper[4757]: E0129 15:30:32.264471 4757 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 15:30:32 crc kubenswrapper[4757]: E0129 15:30:32.264613 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-metrics-certs podName:e25703a2-f64f-43ff-b95f-3c9640fd9029 nodeName:}" failed. No retries permitted until 2026-01-29 15:30:34.264581219 +0000 UTC m=+1197.553831456 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-metrics-certs") pod "openstack-operator-controller-manager-5cbc58956b-jn7tc" (UID: "e25703a2-f64f-43ff-b95f-3c9640fd9029") : secret "metrics-server-cert" not found Jan 29 15:30:32 crc kubenswrapper[4757]: I0129 15:30:32.955464 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-f8c4db9df-76jqr" event={"ID":"dc96ab98-0882-4c4c-8011-642f5da0ce8d","Type":"ContainerStarted","Data":"819c1f49caaf102d48abb0eddc10968a0ba47533090facbfae19270bbcd0be04"} Jan 29 15:30:32 crc kubenswrapper[4757]: E0129 15:30:32.958310 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/glance-operator@sha256:ebb3f9f6e871da3fdfdefdf4040964abcdc5f4c7dac961a27c85a80f37866f00\\\"\"" pod="openstack-operators/glance-operator-controller-manager-f8c4db9df-76jqr" podUID="dc96ab98-0882-4c4c-8011-642f5da0ce8d" Jan 29 15:30:32 crc kubenswrapper[4757]: I0129 15:30:32.963316 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-68cb478976-5rfk2" event={"ID":"a5549d49-38a8-4441-8200-6381ddf682b6","Type":"ContainerStarted","Data":"548ea1a9a9645fd7b6d64f9b416ee5c4c3e65ae34a22987c0c41546328358b8d"} Jan 29 15:30:32 crc kubenswrapper[4757]: I0129 15:30:32.964932 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-866c9d5b98-tbvmq" event={"ID":"eb034926-25ee-4735-a9c4-407c7cd152a4","Type":"ContainerStarted","Data":"9d6d8525aa21503855e9fd21395b9ec25a474413f004e3d2725bca013897f379"} Jan 29 15:30:32 crc kubenswrapper[4757]: I0129 15:30:32.965958 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdwks" event={"ID":"2c9cefc6-204f-42c8-b7a6-2c2776617a58","Type":"ContainerStarted","Data":"fa4821765192447d74174858a5728774f77d1df67e76f5fe6a97d5d87099b845"} Jan 29 15:30:32 crc kubenswrapper[4757]: E0129 15:30:32.966825 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/ironic-operator@sha256:d5166d67cfb571a8b84635a479d0fada7a1f0698ebf1549b7e55e6689e4ecb48\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-866c9d5b98-tbvmq" podUID="eb034926-25ee-4735-a9c4-407c7cd152a4" Jan 29 15:30:32 crc kubenswrapper[4757]: E0129 15:30:32.969067 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdwks" podUID="2c9cefc6-204f-42c8-b7a6-2c2776617a58" Jan 29 15:30:32 crc kubenswrapper[4757]: I0129 15:30:32.970495 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2zgqs" event={"ID":"4ab1a5d0-6fc4-4081-85d6-047635db038e","Type":"ContainerStarted","Data":"0c59bd7969688222e518c5ffc130f42b939270ba4c3b9be3a81fef10babe071c"} Jan 29 15:30:32 crc kubenswrapper[4757]: I0129 15:30:32.971570 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ovn-operator-controller-manager-788c46999f-82wnl" event={"ID":"c7d33f5e-ce62-40e5-9400-c28c1cb50753","Type":"ContainerStarted","Data":"4ed12f67ca933c6826dc40d57dca47107ba6c31764bcf62a40684f2271f868b2"} Jan 29 15:30:32 crc kubenswrapper[4757]: I0129 15:30:32.973918 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-kng6x" event={"ID":"5590a40a-b378-4912-881d-68b46fb6564d","Type":"ContainerStarted","Data":"d0c9f99e25c872804ba8b35615673b864795809a76cdf80883c9d8806b08cefc"} Jan 29 15:30:32 crc kubenswrapper[4757]: E0129 15:30:32.974018 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-82wnl" podUID="c7d33f5e-ce62-40e5-9400-c28c1cb50753" Jan 29 15:30:32 crc kubenswrapper[4757]: E0129 15:30:32.981304 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:61e700ea66730db00f31cb2a89fcd49bb919f246027c414e509166c1cab8429c\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-kng6x" podUID="5590a40a-b378-4912-881d-68b46fb6564d" Jan 29 15:30:32 crc kubenswrapper[4757]: I0129 15:30:32.981477 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-6z2bh" event={"ID":"fd851f0e-29f7-44b9-8c6e-f3b66a90c6b6","Type":"ContainerStarted","Data":"2e90400e182bba79290d87116341d4af59a6df6dfb7fef3217f915e5debe8d09"} Jan 29 15:30:32 crc kubenswrapper[4757]: I0129 15:30:32.986524 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dmc9f" event={"ID":"0971e983-bccd-421c-8171-212672e8b8b7","Type":"ContainerStarted","Data":"fd6d00a7677c14b5fe02545a0d7ad843c0a075c4e4b6d09b767cf581324c88ff"} Jan 29 15:30:32 crc kubenswrapper[4757]: E0129 15:30:32.989400 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dmc9f" podUID="0971e983-bccd-421c-8171-212672e8b8b7" Jan 29 15:30:33 crc kubenswrapper[4757]: I0129 15:30:33.279103 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d2d32e1-adbe-4b24-bd98-0e51a52283f5-cert\") pod \"infra-operator-controller-manager-79955696d6-qzz5n\" (UID: \"5d2d32e1-adbe-4b24-bd98-0e51a52283f5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-qzz5n" Jan 29 15:30:33 crc kubenswrapper[4757]: E0129 15:30:33.279320 4757 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 15:30:33 crc kubenswrapper[4757]: E0129 15:30:33.279376 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d2d32e1-adbe-4b24-bd98-0e51a52283f5-cert podName:5d2d32e1-adbe-4b24-bd98-0e51a52283f5 nodeName:}" failed. 
No retries permitted until 2026-01-29 15:30:37.279358177 +0000 UTC m=+1200.568608414 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5d2d32e1-adbe-4b24-bd98-0e51a52283f5-cert") pod "infra-operator-controller-manager-79955696d6-qzz5n" (UID: "5d2d32e1-adbe-4b24-bd98-0e51a52283f5") : secret "infra-operator-webhook-server-cert" not found Jan 29 15:30:33 crc kubenswrapper[4757]: I0129 15:30:33.785256 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5297dfef-4739-4076-99f2-462bf83c4b4b-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj\" (UID: \"5297dfef-4739-4076-99f2-462bf83c4b4b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj" Jan 29 15:30:33 crc kubenswrapper[4757]: E0129 15:30:33.785419 4757 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:30:33 crc kubenswrapper[4757]: E0129 15:30:33.785481 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5297dfef-4739-4076-99f2-462bf83c4b4b-cert podName:5297dfef-4739-4076-99f2-462bf83c4b4b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:37.785463699 +0000 UTC m=+1201.074713936 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5297dfef-4739-4076-99f2-462bf83c4b4b-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj" (UID: "5297dfef-4739-4076-99f2-462bf83c4b4b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:30:33 crc kubenswrapper[4757]: E0129 15:30:33.995158 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/ironic-operator@sha256:d5166d67cfb571a8b84635a479d0fada7a1f0698ebf1549b7e55e6689e4ecb48\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-866c9d5b98-tbvmq" podUID="eb034926-25ee-4735-a9c4-407c7cd152a4" Jan 29 15:30:33 crc kubenswrapper[4757]: E0129 15:30:33.995546 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-82wnl" podUID="c7d33f5e-ce62-40e5-9400-c28c1cb50753" Jan 29 15:30:33 crc kubenswrapper[4757]: E0129 15:30:33.995583 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdwks" podUID="2c9cefc6-204f-42c8-b7a6-2c2776617a58" Jan 29 15:30:33 crc kubenswrapper[4757]: E0129 15:30:33.995952 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:61e700ea66730db00f31cb2a89fcd49bb919f246027c414e509166c1cab8429c\\\"\"" 
pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-kng6x" podUID="5590a40a-b378-4912-881d-68b46fb6564d" Jan 29 15:30:33 crc kubenswrapper[4757]: E0129 15:30:33.996225 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dmc9f" podUID="0971e983-bccd-421c-8171-212672e8b8b7" Jan 29 15:30:33 crc kubenswrapper[4757]: E0129 15:30:33.998604 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/glance-operator@sha256:ebb3f9f6e871da3fdfdefdf4040964abcdc5f4c7dac961a27c85a80f37866f00\\\"\"" pod="openstack-operators/glance-operator-controller-manager-f8c4db9df-76jqr" podUID="dc96ab98-0882-4c4c-8011-642f5da0ce8d" Jan 29 15:30:34 crc kubenswrapper[4757]: I0129 15:30:34.293516 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-metrics-certs\") pod \"openstack-operator-controller-manager-5cbc58956b-jn7tc\" (UID: \"e25703a2-f64f-43ff-b95f-3c9640fd9029\") " pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" Jan 29 15:30:34 crc kubenswrapper[4757]: I0129 15:30:34.293906 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-webhook-certs\") pod \"openstack-operator-controller-manager-5cbc58956b-jn7tc\" (UID: \"e25703a2-f64f-43ff-b95f-3c9640fd9029\") " pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" Jan 29 15:30:34 crc kubenswrapper[4757]: E0129 15:30:34.294056 4757 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 15:30:34 crc kubenswrapper[4757]: E0129 15:30:34.294114 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-webhook-certs podName:e25703a2-f64f-43ff-b95f-3c9640fd9029 nodeName:}" failed. No retries permitted until 2026-01-29 15:30:38.294098074 +0000 UTC m=+1201.583348311 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-webhook-certs") pod "openstack-operator-controller-manager-5cbc58956b-jn7tc" (UID: "e25703a2-f64f-43ff-b95f-3c9640fd9029") : secret "webhook-server-cert" not found Jan 29 15:30:34 crc kubenswrapper[4757]: E0129 15:30:34.294237 4757 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 15:30:34 crc kubenswrapper[4757]: E0129 15:30:34.294411 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-metrics-certs podName:e25703a2-f64f-43ff-b95f-3c9640fd9029 nodeName:}" failed. No retries permitted until 2026-01-29 15:30:38.294384902 +0000 UTC m=+1201.583635169 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-metrics-certs") pod "openstack-operator-controller-manager-5cbc58956b-jn7tc" (UID: "e25703a2-f64f-43ff-b95f-3c9640fd9029") : secret "metrics-server-cert" not found Jan 29 15:30:37 crc kubenswrapper[4757]: I0129 15:30:37.342139 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d2d32e1-adbe-4b24-bd98-0e51a52283f5-cert\") pod \"infra-operator-controller-manager-79955696d6-qzz5n\" (UID: \"5d2d32e1-adbe-4b24-bd98-0e51a52283f5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-qzz5n" Jan 29 15:30:37 crc kubenswrapper[4757]: E0129 15:30:37.342590 4757 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 15:30:37 crc kubenswrapper[4757]: E0129 15:30:37.342647 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d2d32e1-adbe-4b24-bd98-0e51a52283f5-cert podName:5d2d32e1-adbe-4b24-bd98-0e51a52283f5 nodeName:}" failed. No retries permitted until 2026-01-29 15:30:45.342629789 +0000 UTC m=+1208.631880026 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5d2d32e1-adbe-4b24-bd98-0e51a52283f5-cert") pod "infra-operator-controller-manager-79955696d6-qzz5n" (UID: "5d2d32e1-adbe-4b24-bd98-0e51a52283f5") : secret "infra-operator-webhook-server-cert" not found Jan 29 15:30:37 crc kubenswrapper[4757]: I0129 15:30:37.856235 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5297dfef-4739-4076-99f2-462bf83c4b4b-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj\" (UID: \"5297dfef-4739-4076-99f2-462bf83c4b4b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj" Jan 29 15:30:37 crc kubenswrapper[4757]: E0129 15:30:37.856436 4757 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:30:37 crc kubenswrapper[4757]: E0129 15:30:37.856751 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5297dfef-4739-4076-99f2-462bf83c4b4b-cert podName:5297dfef-4739-4076-99f2-462bf83c4b4b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:45.856711212 +0000 UTC m=+1209.145961449 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5297dfef-4739-4076-99f2-462bf83c4b4b-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj" (UID: "5297dfef-4739-4076-99f2-462bf83c4b4b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:30:38 crc kubenswrapper[4757]: I0129 15:30:38.360461 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-metrics-certs\") pod \"openstack-operator-controller-manager-5cbc58956b-jn7tc\" (UID: \"e25703a2-f64f-43ff-b95f-3c9640fd9029\") " pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" Jan 29 15:30:38 crc kubenswrapper[4757]: I0129 15:30:38.360563 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-webhook-certs\") pod \"openstack-operator-controller-manager-5cbc58956b-jn7tc\" (UID: \"e25703a2-f64f-43ff-b95f-3c9640fd9029\") " pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" Jan 29 15:30:38 crc kubenswrapper[4757]: E0129 15:30:38.360621 4757 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 15:30:38 crc kubenswrapper[4757]: E0129 15:30:38.360684 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-metrics-certs podName:e25703a2-f64f-43ff-b95f-3c9640fd9029 nodeName:}" failed. No retries permitted until 2026-01-29 15:30:46.360667021 +0000 UTC m=+1209.649917248 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-metrics-certs") pod "openstack-operator-controller-manager-5cbc58956b-jn7tc" (UID: "e25703a2-f64f-43ff-b95f-3c9640fd9029") : secret "metrics-server-cert" not found Jan 29 15:30:38 crc kubenswrapper[4757]: E0129 15:30:38.360819 4757 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 15:30:38 crc kubenswrapper[4757]: E0129 15:30:38.360903 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-webhook-certs podName:e25703a2-f64f-43ff-b95f-3c9640fd9029 nodeName:}" failed. No retries permitted until 2026-01-29 15:30:46.360892158 +0000 UTC m=+1209.650142395 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-webhook-certs") pod "openstack-operator-controller-manager-5cbc58956b-jn7tc" (UID: "e25703a2-f64f-43ff-b95f-3c9640fd9029") : secret "webhook-server-cert" not found Jan 29 15:30:45 crc kubenswrapper[4757]: I0129 15:30:45.415676 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d2d32e1-adbe-4b24-bd98-0e51a52283f5-cert\") pod \"infra-operator-controller-manager-79955696d6-qzz5n\" (UID: \"5d2d32e1-adbe-4b24-bd98-0e51a52283f5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-qzz5n" Jan 29 15:30:45 crc kubenswrapper[4757]: E0129 15:30:45.416081 4757 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 15:30:45 crc kubenswrapper[4757]: E0129 15:30:45.416130 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d2d32e1-adbe-4b24-bd98-0e51a52283f5-cert podName:5d2d32e1-adbe-4b24-bd98-0e51a52283f5 nodeName:}" failed. No retries permitted until 2026-01-29 15:31:01.416114664 +0000 UTC m=+1224.705364901 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5d2d32e1-adbe-4b24-bd98-0e51a52283f5-cert") pod "infra-operator-controller-manager-79955696d6-qzz5n" (UID: "5d2d32e1-adbe-4b24-bd98-0e51a52283f5") : secret "infra-operator-webhook-server-cert" not found Jan 29 15:30:45 crc kubenswrapper[4757]: I0129 15:30:45.920118 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5297dfef-4739-4076-99f2-462bf83c4b4b-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj\" (UID: \"5297dfef-4739-4076-99f2-462bf83c4b4b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj" Jan 29 15:30:45 crc kubenswrapper[4757]: E0129 15:30:45.920319 4757 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:30:45 crc kubenswrapper[4757]: E0129 15:30:45.920421 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5297dfef-4739-4076-99f2-462bf83c4b4b-cert podName:5297dfef-4739-4076-99f2-462bf83c4b4b nodeName:}" failed. No retries permitted until 2026-01-29 15:31:01.920401163 +0000 UTC m=+1225.209651400 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5297dfef-4739-4076-99f2-462bf83c4b4b-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj" (UID: "5297dfef-4739-4076-99f2-462bf83c4b4b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:30:46 crc kubenswrapper[4757]: I0129 15:30:46.440869 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-metrics-certs\") pod \"openstack-operator-controller-manager-5cbc58956b-jn7tc\" (UID: \"e25703a2-f64f-43ff-b95f-3c9640fd9029\") " pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" Jan 29 15:30:46 crc kubenswrapper[4757]: I0129 15:30:46.440950 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-webhook-certs\") pod \"openstack-operator-controller-manager-5cbc58956b-jn7tc\" (UID: \"e25703a2-f64f-43ff-b95f-3c9640fd9029\") " pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" Jan 29 15:30:46 crc kubenswrapper[4757]: E0129 15:30:46.441038 4757 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 15:30:46 crc kubenswrapper[4757]: E0129 15:30:46.441082 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-webhook-certs podName:e25703a2-f64f-43ff-b95f-3c9640fd9029 nodeName:}" failed. No retries permitted until 2026-01-29 15:31:02.441068676 +0000 UTC m=+1225.730318913 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-webhook-certs") pod "openstack-operator-controller-manager-5cbc58956b-jn7tc" (UID: "e25703a2-f64f-43ff-b95f-3c9640fd9029") : secret "webhook-server-cert" not found Jan 29 15:30:46 crc kubenswrapper[4757]: E0129 15:30:46.441179 4757 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 15:30:46 crc kubenswrapper[4757]: E0129 15:30:46.441202 4757 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-metrics-certs podName:e25703a2-f64f-43ff-b95f-3c9640fd9029 nodeName:}" failed. No retries permitted until 2026-01-29 15:31:02.44119606 +0000 UTC m=+1225.730446297 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-metrics-certs") pod "openstack-operator-controller-manager-5cbc58956b-jn7tc" (UID: "e25703a2-f64f-43ff-b95f-3c9640fd9029") : secret "metrics-server-cert" not found Jan 29 15:30:48 crc kubenswrapper[4757]: I0129 15:30:48.399070 4757 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 15:30:48 crc kubenswrapper[4757]: E0129 15:30:48.838008 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/cinder-operator@sha256:9a564938039ddc2270feaa565a444c70c1d0d55906006ea88830f48cd4ed862b" Jan 29 15:30:48 crc kubenswrapper[4757]: E0129 15:30:48.838198 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/cinder-operator@sha256:9a564938039ddc2270feaa565a444c70c1d0d55906006ea88830f48cd4ed862b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gbnb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-858d89fd-hf2f8_openstack-operators(2db120e3-48a1-46c6-9d75-9e60012dcff4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:30:48 crc kubenswrapper[4757]: E0129 15:30:48.839484 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-858d89fd-hf2f8" podUID="2db120e3-48a1-46c6-9d75-9e60012dcff4" Jan 29 15:30:49 crc kubenswrapper[4757]: E0129 15:30:49.113386 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/cinder-operator@sha256:9a564938039ddc2270feaa565a444c70c1d0d55906006ea88830f48cd4ed862b\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-858d89fd-hf2f8" podUID="2db120e3-48a1-46c6-9d75-9e60012dcff4" Jan 29 15:30:49 crc kubenswrapper[4757]: E0129 15:30:49.135373 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8" Jan 29 15:30:49 crc kubenswrapper[4757]: E0129 15:30:49.135599 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7m6rl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-s5px2_openstack-operators(edc8f287-a4c1-4558-b279-5159e135e838): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 
15:30:49 crc kubenswrapper[4757]: E0129 15:30:49.136778 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-s5px2" podUID="edc8f287-a4c1-4558-b279-5159e135e838" Jan 29 15:30:50 crc kubenswrapper[4757]: E0129 15:30:50.117521 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-s5px2" podUID="edc8f287-a4c1-4558-b279-5159e135e838" Jan 29 15:30:50 crc kubenswrapper[4757]: E0129 15:30:50.179668 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/manila-operator@sha256:485c920b3385679f1df13ba46707c204b4212ea23621cbc75b44c062da20e495" Jan 29 15:30:50 crc kubenswrapper[4757]: E0129 15:30:50.179897 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/manila-operator@sha256:485c920b3385679f1df13ba46707c204b4212ea23621cbc75b44c062da20e495,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dd9wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
manila-operator-controller-manager-76c896469f-lflf2_openstack-operators(d75a2490-77f1-41f0-b9c5-efcc7a2e520c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:30:50 crc kubenswrapper[4757]: E0129 15:30:50.181116 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-76c896469f-lflf2" podUID="d75a2490-77f1-41f0-b9c5-efcc7a2e520c" Jan 29 15:30:51 crc kubenswrapper[4757]: E0129 15:30:51.122138 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/manila-operator@sha256:485c920b3385679f1df13ba46707c204b4212ea23621cbc75b44c062da20e495\\\"\"" pod="openstack-operators/manila-operator-controller-manager-76c896469f-lflf2" podUID="d75a2490-77f1-41f0-b9c5-efcc7a2e520c" Jan 29 15:30:54 crc kubenswrapper[4757]: E0129 15:30:54.400568 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/swift-operator@sha256:4dfb3cd42806f7989d962e2346a58c6358e70cf95c41b4890e26cb5219805ac8" Jan 29 15:30:54 crc kubenswrapper[4757]: E0129 15:30:54.401101 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/swift-operator@sha256:4dfb3cd42806f7989d962e2346a58c6358e70cf95c41b4890e26cb5219805ac8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ljjhs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6f7455757b-zfvjn_openstack-operators(a921cf1b-0823-487b-9b4f-eb7eefca9cb5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:30:54 crc kubenswrapper[4757]: E0129 15:30:54.402240 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-zfvjn" podUID="a921cf1b-0823-487b-9b4f-eb7eefca9cb5" Jan 29 15:30:55 crc kubenswrapper[4757]: E0129 15:30:55.146520 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/swift-operator@sha256:4dfb3cd42806f7989d962e2346a58c6358e70cf95c41b4890e26cb5219805ac8\\\"\"" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-zfvjn" podUID="a921cf1b-0823-487b-9b4f-eb7eefca9cb5" Jan 29 15:30:56 crc kubenswrapper[4757]: E0129 15:30:56.738190 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.198:5001/openstack-k8s-operators/barbican-operator:ec49646f454ddfdb90ed665057bdbf99c4d6a382" Jan 29 15:30:56 crc kubenswrapper[4757]: E0129 15:30:56.738256 4757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.198:5001/openstack-k8s-operators/barbican-operator:ec49646f454ddfdb90ed665057bdbf99c4d6a382" Jan 29 15:30:56 crc kubenswrapper[4757]: E0129 15:30:56.738427 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.198:5001/openstack-k8s-operators/barbican-operator:ec49646f454ddfdb90ed665057bdbf99c4d6a382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gr4zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-79f547bdd5-7bg8k_openstack-operators(629b88f8-504a-4e19-914a-7359c131deb2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:30:56 crc kubenswrapper[4757]: E0129 15:30:56.739672 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-79f547bdd5-7bg8k" podUID="629b88f8-504a-4e19-914a-7359c131deb2" Jan 29 15:30:57 crc kubenswrapper[4757]: E0129 15:30:57.163308 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.198:5001/openstack-k8s-operators/barbican-operator:ec49646f454ddfdb90ed665057bdbf99c4d6a382\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-79f547bdd5-7bg8k" podUID="629b88f8-504a-4e19-914a-7359c131deb2" Jan 29 15:31:00 crc kubenswrapper[4757]: E0129 15:31:00.554260 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/neutron-operator@sha256:1567ac98879f64271365fe819b1daeada2e65e56dc713a23e27faeb09e4a8889" Jan 29 15:31:00 crc kubenswrapper[4757]: E0129 15:31:00.554507 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/neutron-operator@sha256:1567ac98879f64271365fe819b1daeada2e65e56dc713a23e27faeb09e4a8889,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mwwk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7c7cc6ff45-gpkbd_openstack-operators(0180bde3-8b8c-4ffe-a5d2-cc39199feb28): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:31:00 crc kubenswrapper[4757]: E0129 15:31:00.555753 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-gpkbd" podUID="0180bde3-8b8c-4ffe-a5d2-cc39199feb28" Jan 29 15:31:01 crc kubenswrapper[4757]: E0129 15:31:01.060239 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Jan 29 15:31:01 crc kubenswrapper[4757]: E0129 15:31:01.060456 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lnfxp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-2kd46_openstack-operators(635077c8-931b-4bda-b7dc-117279b97a5e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:31:01 crc kubenswrapper[4757]: E0129 15:31:01.062046 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2kd46" podUID="635077c8-931b-4bda-b7dc-117279b97a5e" Jan 29 15:31:01 crc kubenswrapper[4757]: E0129 15:31:01.140278 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/heat-operator@sha256:af2d94d0cba25ca19e514a5213b872809ed4cb7fab47a87d4403010415b3471e" Jan 29 15:31:01 crc kubenswrapper[4757]: E0129 15:31:01.140443 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/heat-operator@sha256:af2d94d0cba25ca19e514a5213b872809ed4cb7fab47a87d4403010415b3471e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-52gz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-d8b84fbc-qrdfv_openstack-operators(0ae0f41a-2010-4578-a849-a47110a5cad7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:31:01 crc kubenswrapper[4757]: E0129 15:31:01.141610 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-d8b84fbc-qrdfv" podUID="0ae0f41a-2010-4578-a849-a47110a5cad7" Jan 29 15:31:01 crc kubenswrapper[4757]: E0129 15:31:01.188414 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:1567ac98879f64271365fe819b1daeada2e65e56dc713a23e27faeb09e4a8889\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-gpkbd" podUID="0180bde3-8b8c-4ffe-a5d2-cc39199feb28" Jan 29 15:31:01 crc kubenswrapper[4757]: E0129 15:31:01.188452 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/heat-operator@sha256:af2d94d0cba25ca19e514a5213b872809ed4cb7fab47a87d4403010415b3471e\\\"\"" pod="openstack-operators/heat-operator-controller-manager-d8b84fbc-qrdfv" podUID="0ae0f41a-2010-4578-a849-a47110a5cad7" Jan 29 15:31:01 crc 
kubenswrapper[4757]: E0129 15:31:01.189543 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2kd46" podUID="635077c8-931b-4bda-b7dc-117279b97a5e" Jan 29 15:31:01 crc kubenswrapper[4757]: I0129 15:31:01.462483 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d2d32e1-adbe-4b24-bd98-0e51a52283f5-cert\") pod \"infra-operator-controller-manager-79955696d6-qzz5n\" (UID: \"5d2d32e1-adbe-4b24-bd98-0e51a52283f5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-qzz5n" Jan 29 15:31:01 crc kubenswrapper[4757]: I0129 15:31:01.469414 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d2d32e1-adbe-4b24-bd98-0e51a52283f5-cert\") pod \"infra-operator-controller-manager-79955696d6-qzz5n\" (UID: \"5d2d32e1-adbe-4b24-bd98-0e51a52283f5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-qzz5n" Jan 29 15:31:01 crc kubenswrapper[4757]: I0129 15:31:01.610711 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-2jhrc" Jan 29 15:31:01 crc kubenswrapper[4757]: I0129 15:31:01.618462 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-qzz5n" Jan 29 15:31:01 crc kubenswrapper[4757]: I0129 15:31:01.978450 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5297dfef-4739-4076-99f2-462bf83c4b4b-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj\" (UID: \"5297dfef-4739-4076-99f2-462bf83c4b4b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj" Jan 29 15:31:01 crc kubenswrapper[4757]: I0129 15:31:01.986094 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5297dfef-4739-4076-99f2-462bf83c4b4b-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj\" (UID: \"5297dfef-4739-4076-99f2-462bf83c4b4b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj" Jan 29 15:31:02 crc kubenswrapper[4757]: I0129 15:31:02.194599 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-z778s" Jan 29 15:31:02 crc kubenswrapper[4757]: I0129 15:31:02.201949 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj" Jan 29 15:31:02 crc kubenswrapper[4757]: E0129 15:31:02.252150 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/watcher-operator@sha256:d23c69ab5c7d6c649fe9e23db98eae9b9de8dce4f4901511b2b764dd366d7c2c" Jan 29 15:31:02 crc kubenswrapper[4757]: E0129 15:31:02.252384 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/watcher-operator@sha256:d23c69ab5c7d6c649fe9e23db98eae9b9de8dce4f4901511b2b764dd366d7c2c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7x2gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-59f4c7d7c4-6z2bh_openstack-operators(fd851f0e-29f7-44b9-8c6e-f3b66a90c6b6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:31:02 crc kubenswrapper[4757]: E0129 15:31:02.253617 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-6z2bh" podUID="fd851f0e-29f7-44b9-8c6e-f3b66a90c6b6" Jan 29 15:31:02 crc kubenswrapper[4757]: I0129 15:31:02.485750 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-metrics-certs\") pod \"openstack-operator-controller-manager-5cbc58956b-jn7tc\" (UID: \"e25703a2-f64f-43ff-b95f-3c9640fd9029\") " pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" Jan 29 15:31:02 crc kubenswrapper[4757]: I0129 15:31:02.486242 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-webhook-certs\") pod \"openstack-operator-controller-manager-5cbc58956b-jn7tc\" (UID: \"e25703a2-f64f-43ff-b95f-3c9640fd9029\") " pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" Jan 29 15:31:02 crc kubenswrapper[4757]: I0129 15:31:02.492357 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-webhook-certs\") pod \"openstack-operator-controller-manager-5cbc58956b-jn7tc\" (UID: \"e25703a2-f64f-43ff-b95f-3c9640fd9029\") " pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" Jan 29 15:31:02 crc kubenswrapper[4757]: I0129 15:31:02.492618 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e25703a2-f64f-43ff-b95f-3c9640fd9029-metrics-certs\") pod \"openstack-operator-controller-manager-5cbc58956b-jn7tc\" (UID: \"e25703a2-f64f-43ff-b95f-3c9640fd9029\") " pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" Jan 29 15:31:02 crc kubenswrapper[4757]: I0129 15:31:02.646116 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-shpll" Jan 29 15:31:02 crc kubenswrapper[4757]: I0129 15:31:02.655328 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" Jan 29 15:31:03 crc kubenswrapper[4757]: E0129 15:31:03.197540 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:d23c69ab5c7d6c649fe9e23db98eae9b9de8dce4f4901511b2b764dd366d7c2c\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-6z2bh" podUID="fd851f0e-29f7-44b9-8c6e-f3b66a90c6b6" Jan 29 15:31:09 crc kubenswrapper[4757]: E0129 15:31:09.486888 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/designate-operator@sha256:b0215a60bdcbb8ab35f163ea92a0d50c232e034969cdf47944bbe343671d84a9" Jan 29 15:31:09 crc kubenswrapper[4757]: E0129 15:31:09.487770 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/designate-operator@sha256:b0215a60bdcbb8ab35f163ea92a0d50c232e034969cdf47944bbe343671d84a9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b9xgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-dd77988f8-h7w6l_openstack-operators(dc003609-336a-4cc2-a0fa-e3cd693a803d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:31:09 crc kubenswrapper[4757]: E0129 15:31:09.488983 4757 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-dd77988f8-h7w6l" podUID="dc003609-336a-4cc2-a0fa-e3cd693a803d" Jan 29 15:31:10 crc kubenswrapper[4757]: E0129 15:31:10.139914 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/telemetry-operator@sha256:ee0236c7a8c8383b0a633b6f6e5f31200462ba68a51c45362836014c08c0c976" Jan 29 15:31:10 crc kubenswrapper[4757]: E0129 15:31:10.140463 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:ee0236c7a8c8383b0a633b6f6e5f31200462ba68a51c45362836014c08c0c976,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sb8hq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6cf8c44c7-grncr_openstack-operators(1373c007-6220-40ca-a9a7-176d6779ff9e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:31:10 crc kubenswrapper[4757]: E0129 15:31:10.141736 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-grncr" 
podUID="1373c007-6220-40ca-a9a7-176d6779ff9e" Jan 29 15:31:10 crc kubenswrapper[4757]: E0129 15:31:10.243472 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:ee0236c7a8c8383b0a633b6f6e5f31200462ba68a51c45362836014c08c0c976\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-grncr" podUID="1373c007-6220-40ca-a9a7-176d6779ff9e" Jan 29 15:31:10 crc kubenswrapper[4757]: E0129 15:31:10.244221 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/designate-operator@sha256:b0215a60bdcbb8ab35f163ea92a0d50c232e034969cdf47944bbe343671d84a9\\\"\"" pod="openstack-operators/designate-operator-controller-manager-dd77988f8-h7w6l" podUID="dc003609-336a-4cc2-a0fa-e3cd693a803d" Jan 29 15:31:10 crc kubenswrapper[4757]: E0129 15:31:10.642735 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488" Jan 29 15:31:10 crc kubenswrapper[4757]: E0129 15:31:10.642928 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2nmtq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-2zgqs_openstack-operators(4ab1a5d0-6fc4-4081-85d6-047635db038e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:31:10 crc kubenswrapper[4757]: E0129 15:31:10.644174 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2zgqs" podUID="4ab1a5d0-6fc4-4081-85d6-047635db038e" Jan 29 15:31:11 crc kubenswrapper[4757]: E0129 15:31:11.252250 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2zgqs" podUID="4ab1a5d0-6fc4-4081-85d6-047635db038e" Jan 29 15:31:12 crc kubenswrapper[4757]: E0129 15:31:12.209978 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/keystone-operator@sha256:902a4578cd72634f02778ebeb05d5c76cda3c1275ebb51f2c4e042eda9f17a3b" Jan 29 15:31:12 crc kubenswrapper[4757]: E0129 15:31:12.210164 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/keystone-operator@sha256:902a4578cd72634f02778ebeb05d5c76cda3c1275ebb51f2c4e042eda9f17a3b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kd5h7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-8ccc8547b-jh2fm_openstack-operators(e9b2ed23-04f3-479f-870f-10f54f6ecab9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:31:12 crc kubenswrapper[4757]: E0129 15:31:12.211250 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-8ccc8547b-jh2fm" podUID="e9b2ed23-04f3-479f-870f-10f54f6ecab9" Jan 29 15:31:12 crc kubenswrapper[4757]: E0129 15:31:12.258141 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/keystone-operator@sha256:902a4578cd72634f02778ebeb05d5c76cda3c1275ebb51f2c4e042eda9f17a3b\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-8ccc8547b-jh2fm" podUID="e9b2ed23-04f3-479f-870f-10f54f6ecab9" Jan 29 15:31:12 crc kubenswrapper[4757]: E0129 15:31:12.732720 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241" Jan 29 15:31:12 crc kubenswrapper[4757]: E0129 15:31:12.732954 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b7w4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-dmc9f_openstack-operators(0971e983-bccd-421c-8171-212672e8b8b7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:31:12 crc kubenswrapper[4757]: E0129 15:31:12.734145 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dmc9f" podUID="0971e983-bccd-421c-8171-212672e8b8b7" Jan 29 15:31:13 crc kubenswrapper[4757]: E0129 15:31:13.625005 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/glance-operator@sha256:ebb3f9f6e871da3fdfdefdf4040964abcdc5f4c7dac961a27c85a80f37866f00" Jan 29 15:31:13 crc kubenswrapper[4757]: E0129 15:31:13.625681 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/glance-operator@sha256:ebb3f9f6e871da3fdfdefdf4040964abcdc5f4c7dac961a27c85a80f37866f00,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cmh9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-f8c4db9df-76jqr_openstack-operators(dc96ab98-0882-4c4c-8011-642f5da0ce8d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:31:13 crc kubenswrapper[4757]: E0129 15:31:13.627087 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-f8c4db9df-76jqr" podUID="dc96ab98-0882-4c4c-8011-642f5da0ce8d" Jan 29 15:31:14 crc kubenswrapper[4757]: E0129 15:31:14.854510 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4" Jan 29 15:31:14 crc kubenswrapper[4757]: E0129 15:31:14.854828 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l5xnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-82wnl_openstack-operators(c7d33f5e-ce62-40e5-9400-c28c1cb50753): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:31:14 crc kubenswrapper[4757]: E0129 15:31:14.856188 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-82wnl" podUID="c7d33f5e-ce62-40e5-9400-c28c1cb50753" Jan 29 15:31:17 crc kubenswrapper[4757]: E0129 15:31:17.310875 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/ironic-operator@sha256:d5166d67cfb571a8b84635a479d0fada7a1f0698ebf1549b7e55e6689e4ecb48" Jan 29 15:31:17 crc kubenswrapper[4757]: E0129 15:31:17.311398 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/ironic-operator@sha256:d5166d67cfb571a8b84635a479d0fada7a1f0698ebf1549b7e55e6689e4ecb48,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-54n8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-866c9d5b98-tbvmq_openstack-operators(eb034926-25ee-4735-a9c4-407c7cd152a4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:31:17 crc kubenswrapper[4757]: E0129 15:31:17.312568 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-866c9d5b98-tbvmq" podUID="eb034926-25ee-4735-a9c4-407c7cd152a4" Jan 29 15:31:24 crc kubenswrapper[4757]: E0129 15:31:24.984964 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dmc9f" podUID="0971e983-bccd-421c-8171-212672e8b8b7" Jan 29 15:31:26 crc kubenswrapper[4757]: E0129 15:31:26.400653 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-82wnl" podUID="c7d33f5e-ce62-40e5-9400-c28c1cb50753" Jan 29 15:31:26 crc kubenswrapper[4757]: E0129 15:31:26.403078 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/glance-operator@sha256:ebb3f9f6e871da3fdfdefdf4040964abcdc5f4c7dac961a27c85a80f37866f00\\\"\"" pod="openstack-operators/glance-operator-controller-manager-f8c4db9df-76jqr" podUID="dc96ab98-0882-4c4c-8011-642f5da0ce8d" Jan 29 15:31:30 crc kubenswrapper[4757]: E0129 15:31:30.398106 4757 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/ironic-operator@sha256:d5166d67cfb571a8b84635a479d0fada7a1f0698ebf1549b7e55e6689e4ecb48\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-866c9d5b98-tbvmq" podUID="eb034926-25ee-4735-a9c4-407c7cd152a4" Jan 29 15:31:34 crc kubenswrapper[4757]: E0129 15:31:34.276208 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.198:5001/openstack-k8s-operators/barbican-operator:ec49646f454ddfdb90ed665057bdbf99c4d6a382" Jan 29 15:31:34 crc kubenswrapper[4757]: E0129 15:31:34.278409 4757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.198:5001/openstack-k8s-operators/barbican-operator:ec49646f454ddfdb90ed665057bdbf99c4d6a382" Jan 29 15:31:34 crc kubenswrapper[4757]: E0129 15:31:34.278838 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.198:5001/openstack-k8s-operators/barbican-operator:ec49646f454ddfdb90ed665057bdbf99c4d6a382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gr4zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-79f547bdd5-7bg8k_openstack-operators(629b88f8-504a-4e19-914a-7359c131deb2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:31:34 
crc kubenswrapper[4757]: E0129 15:31:34.280772 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-79f547bdd5-7bg8k" podUID="629b88f8-504a-4e19-914a-7359c131deb2" Jan 29 15:31:35 crc kubenswrapper[4757]: E0129 15:31:35.347049 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/octavia-operator@sha256:61e700ea66730db00f31cb2a89fcd49bb919f246027c414e509166c1cab8429c" Jan 29 15:31:35 crc kubenswrapper[4757]: E0129 15:31:35.347677 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:61e700ea66730db00f31cb2a89fcd49bb919f246027c414e509166c1cab8429c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w9688,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-68f8cb846c-kng6x_openstack-operators(5590a40a-b378-4912-881d-68b46fb6564d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:31:35 crc kubenswrapper[4757]: E0129 15:31:35.349479 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-kng6x" podUID="5590a40a-b378-4912-881d-68b46fb6564d" Jan 29 15:31:35 crc kubenswrapper[4757]: E0129 15:31:35.971771 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/heat-operator@sha256:af2d94d0cba25ca19e514a5213b872809ed4cb7fab47a87d4403010415b3471e" Jan 29 15:31:35 crc kubenswrapper[4757]: E0129 15:31:35.972144 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/heat-operator@sha256:af2d94d0cba25ca19e514a5213b872809ed4cb7fab47a87d4403010415b3471e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-52gz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-d8b84fbc-qrdfv_openstack-operators(0ae0f41a-2010-4578-a849-a47110a5cad7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:31:35 crc kubenswrapper[4757]: E0129 15:31:35.973814 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-d8b84fbc-qrdfv" podUID="0ae0f41a-2010-4578-a849-a47110a5cad7" Jan 29 15:31:36 crc kubenswrapper[4757]: E0129 15:31:36.344603 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: 
context canceled" image="quay.io/lmiccini/nova-operator@sha256:cabd70e99de91d2731cd76d71375b4d51ab37ed1116a8e9464551e19921c7c97" Jan 29 15:31:36 crc kubenswrapper[4757]: E0129 15:31:36.344867 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/nova-operator@sha256:cabd70e99de91d2731cd76d71375b4d51ab37ed1116a8e9464551e19921c7c97,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w69fj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-68cb478976-5rfk2_openstack-operators(a5549d49-38a8-4441-8200-6381ddf682b6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:31:36 crc kubenswrapper[4757]: E0129 15:31:36.346133 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-68cb478976-5rfk2" podUID="a5549d49-38a8-4441-8200-6381ddf682b6" Jan 29 15:31:36 crc kubenswrapper[4757]: E0129 15:31:36.482773 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/nova-operator@sha256:cabd70e99de91d2731cd76d71375b4d51ab37ed1116a8e9464551e19921c7c97\\\"\"" pod="openstack-operators/nova-operator-controller-manager-68cb478976-5rfk2" podUID="a5549d49-38a8-4441-8200-6381ddf682b6" Jan 29 
15:31:38 crc kubenswrapper[4757]: I0129 15:31:38.222946 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-qzz5n"] Jan 29 15:31:38 crc kubenswrapper[4757]: I0129 15:31:38.296630 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj"] Jan 29 15:31:38 crc kubenswrapper[4757]: I0129 15:31:38.348897 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc"] Jan 29 15:31:42 crc kubenswrapper[4757]: E0129 15:31:42.237811 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 29 15:31:42 crc kubenswrapper[4757]: E0129 15:31:42.240778 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fljvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-fdwks_openstack-operators(2c9cefc6-204f-42c8-b7a6-2c2776617a58): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:31:42 crc kubenswrapper[4757]: E0129 15:31:42.242195 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdwks" podUID="2c9cefc6-204f-42c8-b7a6-2c2776617a58" Jan 29 15:31:42 crc 
kubenswrapper[4757]: I0129 15:31:42.487672 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" event={"ID":"e25703a2-f64f-43ff-b95f-3c9640fd9029","Type":"ContainerStarted","Data":"1f88814a557058c1033fb491d523a49e44ac3712fbeba817c063bf7a96834f06"} Jan 29 15:31:42 crc kubenswrapper[4757]: I0129 15:31:42.488676 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj" event={"ID":"5297dfef-4739-4076-99f2-462bf83c4b4b","Type":"ContainerStarted","Data":"7db5e660821beb63dc96890cb742affb78d06860004c83242a3e2618a4948d9b"} Jan 29 15:31:42 crc kubenswrapper[4757]: I0129 15:31:42.490493 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-qzz5n" event={"ID":"5d2d32e1-adbe-4b24-bd98-0e51a52283f5","Type":"ContainerStarted","Data":"dd7530d7fef352b64958a54d185f60145dcaf2073f354f60e74c22d03c5bb7c3"} Jan 29 15:31:43 crc kubenswrapper[4757]: I0129 15:31:43.496945 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-zfvjn" event={"ID":"a921cf1b-0823-487b-9b4f-eb7eefca9cb5","Type":"ContainerStarted","Data":"eca5ade0ddde72229ad3f5520c782e764e8be23267008db6e629817204f19390"} Jan 29 15:31:43 crc kubenswrapper[4757]: I0129 15:31:43.497471 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-zfvjn" Jan 29 15:31:43 crc kubenswrapper[4757]: I0129 15:31:43.498490 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-76c896469f-lflf2" event={"ID":"d75a2490-77f1-41f0-b9c5-efcc7a2e520c","Type":"ContainerStarted","Data":"817f5e73d9e20fb8c710e9d807000fb4c8114294a2fb5aa87ef8be9cea42cc33"} Jan 29 15:31:43 crc kubenswrapper[4757]: I0129 15:31:43.498743 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-76c896469f-lflf2" Jan 29 15:31:43 crc kubenswrapper[4757]: I0129 15:31:43.499919 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-858d89fd-hf2f8" event={"ID":"2db120e3-48a1-46c6-9d75-9e60012dcff4","Type":"ContainerStarted","Data":"ded757cddb162015c2b2593ffda3abe94f60f6faee66f9f67a93b9254e2aa5b0"} Jan 29 15:31:43 crc kubenswrapper[4757]: I0129 15:31:43.500124 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-858d89fd-hf2f8" Jan 29 15:31:43 crc kubenswrapper[4757]: I0129 15:31:43.501633 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-s5px2" event={"ID":"edc8f287-a4c1-4558-b279-5159e135e838","Type":"ContainerStarted","Data":"ba302d94e5e5db3c5b7f44c7db7d1651401a6005870cc28ddd9d704f5f5b8a96"} Jan 29 15:31:43 crc kubenswrapper[4757]: I0129 15:31:43.501779 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-s5px2" Jan 29 15:31:43 crc kubenswrapper[4757]: I0129 15:31:43.522548 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-zfvjn" podStartSLOduration=8.612769711 podStartE2EDuration="1m14.522530184s" 
podCreationTimestamp="2026-01-29 15:30:29 +0000 UTC" firstStartedPulling="2026-01-29 15:30:31.856553751 +0000 UTC m=+1195.145803988" lastFinishedPulling="2026-01-29 15:31:37.766314224 +0000 UTC m=+1261.055564461" observedRunningTime="2026-01-29 15:31:43.520499736 +0000 UTC m=+1266.809749983" watchObservedRunningTime="2026-01-29 15:31:43.522530184 +0000 UTC m=+1266.811780421" Jan 29 15:31:43 crc kubenswrapper[4757]: I0129 15:31:43.542208 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-858d89fd-hf2f8" podStartSLOduration=7.639785965 podStartE2EDuration="1m14.542191624s" podCreationTimestamp="2026-01-29 15:30:29 +0000 UTC" firstStartedPulling="2026-01-29 15:30:30.864843792 +0000 UTC m=+1194.154094019" lastFinishedPulling="2026-01-29 15:31:37.767249441 +0000 UTC m=+1261.056499678" observedRunningTime="2026-01-29 15:31:43.537421566 +0000 UTC m=+1266.826671803" watchObservedRunningTime="2026-01-29 15:31:43.542191624 +0000 UTC m=+1266.831441861" Jan 29 15:31:43 crc kubenswrapper[4757]: I0129 15:31:43.553585 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-s5px2" podStartSLOduration=8.352603259 podStartE2EDuration="1m14.553563154s" podCreationTimestamp="2026-01-29 15:30:29 +0000 UTC" firstStartedPulling="2026-01-29 15:30:31.566448101 +0000 UTC m=+1194.855698338" lastFinishedPulling="2026-01-29 15:31:37.767407986 +0000 UTC m=+1261.056658233" observedRunningTime="2026-01-29 15:31:43.550196186 +0000 UTC m=+1266.839446423" watchObservedRunningTime="2026-01-29 15:31:43.553563154 +0000 UTC m=+1266.842813391" Jan 29 15:31:43 crc kubenswrapper[4757]: I0129 15:31:43.568181 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-76c896469f-lflf2" podStartSLOduration=8.362731063 podStartE2EDuration="1m14.568165147s" podCreationTimestamp="2026-01-29 15:30:29 +0000 UTC" firstStartedPulling="2026-01-29 15:30:31.562040544 +0000 UTC m=+1194.851290781" lastFinishedPulling="2026-01-29 15:31:37.767474588 +0000 UTC m=+1261.056724865" observedRunningTime="2026-01-29 15:31:43.567311453 +0000 UTC m=+1266.856561690" watchObservedRunningTime="2026-01-29 15:31:43.568165147 +0000 UTC m=+1266.857415384" Jan 29 15:31:47 crc kubenswrapper[4757]: E0129 15:31:47.612494 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.198:5001/openstack-k8s-operators/barbican-operator:ec49646f454ddfdb90ed665057bdbf99c4d6a382\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-79f547bdd5-7bg8k" podUID="629b88f8-504a-4e19-914a-7359c131deb2" Jan 29 15:31:49 crc kubenswrapper[4757]: E0129 15:31:49.536430 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/heat-operator@sha256:af2d94d0cba25ca19e514a5213b872809ed4cb7fab47a87d4403010415b3471e\\\"\"" pod="openstack-operators/heat-operator-controller-manager-d8b84fbc-qrdfv" podUID="0ae0f41a-2010-4578-a849-a47110a5cad7" Jan 29 15:31:49 crc kubenswrapper[4757]: I0129 15:31:49.677627 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-858d89fd-hf2f8" Jan 29 15:31:50 crc kubenswrapper[4757]: I0129 15:31:50.069005 4757 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-s5px2" Jan 29 15:31:50 crc kubenswrapper[4757]: I0129 15:31:50.245128 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-76c896469f-lflf2" Jan 29 15:31:50 crc kubenswrapper[4757]: I0129 15:31:50.545672 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-zfvjn" Jan 29 15:31:50 crc kubenswrapper[4757]: I0129 15:31:50.553141 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-dd77988f8-h7w6l" event={"ID":"dc003609-336a-4cc2-a0fa-e3cd693a803d","Type":"ContainerStarted","Data":"9bafa3e7db3870012dc0f389094ef819c3eb2b01bcfaa3e854ab61400e0346c2"} Jan 29 15:31:50 crc kubenswrapper[4757]: I0129 15:31:50.553783 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-dd77988f8-h7w6l" Jan 29 15:31:50 crc kubenswrapper[4757]: I0129 15:31:50.554986 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2kd46" event={"ID":"635077c8-931b-4bda-b7dc-117279b97a5e","Type":"ContainerStarted","Data":"8e334cb5632b82acae8cfe4f2098c47df0a6b102555abde4c8725a7bd8b14034"} Jan 29 15:31:50 crc kubenswrapper[4757]: I0129 15:31:50.555416 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2kd46" Jan 29 15:31:50 crc kubenswrapper[4757]: I0129 15:31:50.556520 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-6z2bh" event={"ID":"fd851f0e-29f7-44b9-8c6e-f3b66a90c6b6","Type":"ContainerStarted","Data":"e926079f5f6f593d4031424d0b9a5b767926e2f08b9ea18690b7fcb3b4498bac"} Jan 29 15:31:50 crc kubenswrapper[4757]: I0129 15:31:50.556874 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-6z2bh" Jan 29 15:31:50 crc kubenswrapper[4757]: I0129 15:31:50.557915 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-8ccc8547b-jh2fm" event={"ID":"e9b2ed23-04f3-479f-870f-10f54f6ecab9","Type":"ContainerStarted","Data":"a75308da15889491fcadf43b8e176ff20d116cddd1b4acb43e7d03180b3f9bf7"} Jan 29 15:31:50 crc kubenswrapper[4757]: I0129 15:31:50.558237 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-8ccc8547b-jh2fm" Jan 29 15:31:50 crc kubenswrapper[4757]: I0129 15:31:50.569972 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" event={"ID":"e25703a2-f64f-43ff-b95f-3c9640fd9029","Type":"ContainerStarted","Data":"5826ff520d8e9fb6560dd828717db597c1fb51fc646a3968084998d5e7072e74"} Jan 29 15:31:50 crc kubenswrapper[4757]: I0129 15:31:50.570137 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" Jan 29 15:31:50 crc kubenswrapper[4757]: I0129 15:31:50.612464 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2kd46" podStartSLOduration=3.239426431 podStartE2EDuration="1m21.612442608s" podCreationTimestamp="2026-01-29 15:30:29 +0000 UTC" firstStartedPulling="2026-01-29 15:30:31.394825956 +0000 UTC m=+1194.684076193" lastFinishedPulling="2026-01-29 15:31:49.767842133 +0000 UTC m=+1273.057092370" observedRunningTime="2026-01-29 15:31:50.604657192 +0000 UTC m=+1273.893907449" watchObservedRunningTime="2026-01-29 15:31:50.612442608 +0000 UTC m=+1273.901692845" Jan 29 15:31:50 crc kubenswrapper[4757]: I0129 15:31:50.642385 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-8ccc8547b-jh2fm" podStartSLOduration=3.322272182 podStartE2EDuration="1m21.642366205s" podCreationTimestamp="2026-01-29 15:30:29 +0000 UTC" firstStartedPulling="2026-01-29 15:30:31.458604695 +0000 UTC m=+1194.747854932" lastFinishedPulling="2026-01-29 15:31:49.778698718 +0000 UTC m=+1273.067948955" observedRunningTime="2026-01-29 15:31:50.638892845 +0000 UTC m=+1273.928143082" watchObservedRunningTime="2026-01-29 15:31:50.642366205 +0000 UTC m=+1273.931616442" Jan 29 15:31:50 crc kubenswrapper[4757]: I0129 15:31:50.681799 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-6z2bh" podStartSLOduration=3.913741617 podStartE2EDuration="1m21.681784267s" podCreationTimestamp="2026-01-29 15:30:29 +0000 UTC" firstStartedPulling="2026-01-29 15:30:31.918492846 +0000 UTC m=+1195.207743083" lastFinishedPulling="2026-01-29 15:31:49.686535496 +0000 UTC m=+1272.975785733" observedRunningTime="2026-01-29 15:31:50.677777541 +0000 UTC m=+1273.967027778" watchObservedRunningTime="2026-01-29 15:31:50.681784267 +0000 UTC m=+1273.971034504" Jan 29 15:31:50 crc kubenswrapper[4757]: I0129 15:31:50.706573 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-dd77988f8-h7w6l" podStartSLOduration=3.153156499 podStartE2EDuration="1m21.706558485s" podCreationTimestamp="2026-01-29 15:30:29 +0000 UTC" firstStartedPulling="2026-01-29 15:30:31.215021554 +0000 UTC m=+1194.504271791" lastFinishedPulling="2026-01-29 15:31:49.76842353 +0000 UTC m=+1273.057673777" observedRunningTime="2026-01-29 15:31:50.702217279 +0000 UTC m=+1273.991467516" watchObservedRunningTime="2026-01-29 15:31:50.706558485 +0000 UTC m=+1273.995808722" Jan 29 15:31:50 crc kubenswrapper[4757]: I0129 15:31:50.737860 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" podStartSLOduration=80.737842572 podStartE2EDuration="1m20.737842572s" podCreationTimestamp="2026-01-29 15:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:31:50.735029561 +0000 UTC m=+1274.024279798" watchObservedRunningTime="2026-01-29 15:31:50.737842572 +0000 UTC m=+1274.027092809" Jan 29 15:31:50 crc kubenswrapper[4757]: E0129 15:31:50.989606 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:61e700ea66730db00f31cb2a89fcd49bb919f246027c414e509166c1cab8429c\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-kng6x" 
podUID="5590a40a-b378-4912-881d-68b46fb6564d" Jan 29 15:31:52 crc kubenswrapper[4757]: I0129 15:31:52.596689 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-grncr" event={"ID":"1373c007-6220-40ca-a9a7-176d6779ff9e","Type":"ContainerStarted","Data":"eff37f14dcb3f7972e9436e4781f830e996fafed13bbefd37829c25d74bb11df"} Jan 29 15:31:52 crc kubenswrapper[4757]: I0129 15:31:52.599159 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-gpkbd" event={"ID":"0180bde3-8b8c-4ffe-a5d2-cc39199feb28","Type":"ContainerStarted","Data":"fea84376ab376b41c6aa91d9f98a6ad0ca47f84355d6c29c9388aeba3051972a"} Jan 29 15:31:52 crc kubenswrapper[4757]: I0129 15:31:52.600989 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2zgqs" event={"ID":"4ab1a5d0-6fc4-4081-85d6-047635db038e","Type":"ContainerStarted","Data":"4753a4166c44785d791c127d70046cd68f95e82592d67123d4a2215a0b67ec97"} Jan 29 15:31:52 crc kubenswrapper[4757]: I0129 15:31:52.602476 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dmc9f" event={"ID":"0971e983-bccd-421c-8171-212672e8b8b7","Type":"ContainerStarted","Data":"7870f945c8f4e4fdd4693d52c5b36ca715e79a2c163289fa2ca1da71a72247f0"} Jan 29 15:31:52 crc kubenswrapper[4757]: I0129 15:31:52.602650 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dmc9f" Jan 29 15:31:52 crc kubenswrapper[4757]: I0129 15:31:52.627090 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dmc9f" podStartSLOduration=4.506013598 podStartE2EDuration="1m23.62707338s" podCreationTimestamp="2026-01-29 15:30:29 +0000 UTC" firstStartedPulling="2026-01-29 15:30:31.932806511 +0000 UTC m=+1195.222056748" lastFinishedPulling="2026-01-29 15:31:51.053866303 +0000 UTC m=+1274.343116530" observedRunningTime="2026-01-29 15:31:52.623701652 +0000 UTC m=+1275.912951889" watchObservedRunningTime="2026-01-29 15:31:52.62707338 +0000 UTC m=+1275.916323617" Jan 29 15:31:53 crc kubenswrapper[4757]: E0129 15:31:53.398216 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdwks" podUID="2c9cefc6-204f-42c8-b7a6-2c2776617a58" Jan 29 15:31:53 crc kubenswrapper[4757]: I0129 15:31:53.609048 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2zgqs" Jan 29 15:31:53 crc kubenswrapper[4757]: I0129 15:31:53.609093 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-grncr" Jan 29 15:31:53 crc kubenswrapper[4757]: I0129 15:31:53.626832 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-grncr" podStartSLOduration=6.758601288 podStartE2EDuration="1m24.626817162s" podCreationTimestamp="2026-01-29 15:30:29 +0000 UTC" 
firstStartedPulling="2026-01-29 15:30:31.910317769 +0000 UTC m=+1195.199568006" lastFinishedPulling="2026-01-29 15:31:49.778533643 +0000 UTC m=+1273.067783880" observedRunningTime="2026-01-29 15:31:53.622784445 +0000 UTC m=+1276.912034692" watchObservedRunningTime="2026-01-29 15:31:53.626817162 +0000 UTC m=+1276.916067399" Jan 29 15:31:53 crc kubenswrapper[4757]: I0129 15:31:53.638073 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2zgqs" podStartSLOduration=6.778819915 podStartE2EDuration="1m24.638055828s" podCreationTimestamp="2026-01-29 15:30:29 +0000 UTC" firstStartedPulling="2026-01-29 15:30:31.91827582 +0000 UTC m=+1195.207526057" lastFinishedPulling="2026-01-29 15:31:49.777511733 +0000 UTC m=+1273.066761970" observedRunningTime="2026-01-29 15:31:53.637683677 +0000 UTC m=+1276.926933914" watchObservedRunningTime="2026-01-29 15:31:53.638055828 +0000 UTC m=+1276.927306065" Jan 29 15:31:53 crc kubenswrapper[4757]: I0129 15:31:53.658436 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-gpkbd" podStartSLOduration=6.694532422 podStartE2EDuration="1m24.658416788s" podCreationTimestamp="2026-01-29 15:30:29 +0000 UTC" firstStartedPulling="2026-01-29 15:30:31.814790051 +0000 UTC m=+1195.104040288" lastFinishedPulling="2026-01-29 15:31:49.778674407 +0000 UTC m=+1273.067924654" observedRunningTime="2026-01-29 15:31:53.653147105 +0000 UTC m=+1276.942397362" watchObservedRunningTime="2026-01-29 15:31:53.658416788 +0000 UTC m=+1276.947667025" Jan 29 15:31:58 crc kubenswrapper[4757]: I0129 15:31:58.660925 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-866c9d5b98-tbvmq" event={"ID":"eb034926-25ee-4735-a9c4-407c7cd152a4","Type":"ContainerStarted","Data":"a7c2de21bfef26ae97fc0aaf850e19b57f4b4ad1cd2972fd656274b3a9d44bfa"} Jan 29 15:31:58 crc kubenswrapper[4757]: I0129 15:31:58.663224 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-866c9d5b98-tbvmq" Jan 29 15:31:58 crc kubenswrapper[4757]: I0129 15:31:58.665568 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-82wnl" event={"ID":"c7d33f5e-ce62-40e5-9400-c28c1cb50753","Type":"ContainerStarted","Data":"b3f4fd5f1384cbf8ebb0157371a463338c47cb93606c58354f75242e5eeb7d9d"} Jan 29 15:31:58 crc kubenswrapper[4757]: I0129 15:31:58.665898 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-82wnl" Jan 29 15:31:58 crc kubenswrapper[4757]: I0129 15:31:58.667549 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-f8c4db9df-76jqr" event={"ID":"dc96ab98-0882-4c4c-8011-642f5da0ce8d","Type":"ContainerStarted","Data":"458afdf5d73cc100a5ab5e6dc109683f212a1a3721f0661e987ce1276d262389"} Jan 29 15:31:58 crc kubenswrapper[4757]: I0129 15:31:58.667828 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-f8c4db9df-76jqr" Jan 29 15:31:58 crc kubenswrapper[4757]: I0129 15:31:58.669767 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-qzz5n" 
event={"ID":"5d2d32e1-adbe-4b24-bd98-0e51a52283f5","Type":"ContainerStarted","Data":"ad14412c0a7d52bd6b93c393b24f647fd77012fa903a052cbff5dc77a77397d8"} Jan 29 15:31:58 crc kubenswrapper[4757]: I0129 15:31:58.669948 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-qzz5n" Jan 29 15:31:58 crc kubenswrapper[4757]: I0129 15:31:58.705556 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-866c9d5b98-tbvmq" podStartSLOduration=4.538276082 podStartE2EDuration="1m29.70553284s" podCreationTimestamp="2026-01-29 15:30:29 +0000 UTC" firstStartedPulling="2026-01-29 15:30:31.989173915 +0000 UTC m=+1195.278424152" lastFinishedPulling="2026-01-29 15:31:57.156430663 +0000 UTC m=+1280.445680910" observedRunningTime="2026-01-29 15:31:58.680902106 +0000 UTC m=+1281.970152343" watchObservedRunningTime="2026-01-29 15:31:58.70553284 +0000 UTC m=+1281.994783077" Jan 29 15:31:58 crc kubenswrapper[4757]: I0129 15:31:58.708552 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-f8c4db9df-76jqr" podStartSLOduration=4.490675281 podStartE2EDuration="1m29.708539867s" podCreationTimestamp="2026-01-29 15:30:29 +0000 UTC" firstStartedPulling="2026-01-29 15:30:31.936361744 +0000 UTC m=+1195.225611981" lastFinishedPulling="2026-01-29 15:31:57.15422632 +0000 UTC m=+1280.443476567" observedRunningTime="2026-01-29 15:31:58.704869371 +0000 UTC m=+1281.994119608" watchObservedRunningTime="2026-01-29 15:31:58.708539867 +0000 UTC m=+1281.997790104" Jan 29 15:31:58 crc kubenswrapper[4757]: I0129 15:31:58.744671 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-qzz5n" podStartSLOduration=80.124260014 podStartE2EDuration="1m29.744651204s" podCreationTimestamp="2026-01-29 15:30:29 +0000 UTC" firstStartedPulling="2026-01-29 15:31:47.543362216 +0000 UTC m=+1270.832612453" lastFinishedPulling="2026-01-29 15:31:57.163753396 +0000 UTC m=+1280.453003643" observedRunningTime="2026-01-29 15:31:58.74379608 +0000 UTC m=+1282.033046327" watchObservedRunningTime="2026-01-29 15:31:58.744651204 +0000 UTC m=+1282.033901441" Jan 29 15:31:58 crc kubenswrapper[4757]: I0129 15:31:58.748830 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-82wnl" podStartSLOduration=4.590886037 podStartE2EDuration="1m29.748820155s" podCreationTimestamp="2026-01-29 15:30:29 +0000 UTC" firstStartedPulling="2026-01-29 15:30:31.997519607 +0000 UTC m=+1195.286769854" lastFinishedPulling="2026-01-29 15:31:57.155453735 +0000 UTC m=+1280.444703972" observedRunningTime="2026-01-29 15:31:58.724520911 +0000 UTC m=+1282.013771148" watchObservedRunningTime="2026-01-29 15:31:58.748820155 +0000 UTC m=+1282.038070392" Jan 29 15:31:59 crc kubenswrapper[4757]: I0129 15:31:59.676386 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj" event={"ID":"5297dfef-4739-4076-99f2-462bf83c4b4b","Type":"ContainerStarted","Data":"56dbf2e609bd619202485d93a8456fac1c49f9af7f44acb4b972821e3b8f19f6"} Jan 29 15:31:59 crc kubenswrapper[4757]: I0129 15:31:59.676724 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj" Jan 29 15:31:59 crc kubenswrapper[4757]: I0129 15:31:59.677809 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-68cb478976-5rfk2" event={"ID":"a5549d49-38a8-4441-8200-6381ddf682b6","Type":"ContainerStarted","Data":"512d4c0a13e0514fdf6d9fec110b446ceecbe5ef4e0dd500c4a3bcb18cda5a76"} Jan 29 15:31:59 crc kubenswrapper[4757]: I0129 15:31:59.710955 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-dd77988f8-h7w6l" Jan 29 15:31:59 crc kubenswrapper[4757]: I0129 15:31:59.733058 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-68cb478976-5rfk2" podStartSLOduration=3.650768834 podStartE2EDuration="1m30.733036057s" podCreationTimestamp="2026-01-29 15:30:29 +0000 UTC" firstStartedPulling="2026-01-29 15:30:31.923569493 +0000 UTC m=+1195.212819730" lastFinishedPulling="2026-01-29 15:31:59.005836706 +0000 UTC m=+1282.295086953" observedRunningTime="2026-01-29 15:31:59.730941686 +0000 UTC m=+1283.020191923" watchObservedRunningTime="2026-01-29 15:31:59.733036057 +0000 UTC m=+1283.022286294" Jan 29 15:31:59 crc kubenswrapper[4757]: I0129 15:31:59.734242 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj" podStartSLOduration=75.086121934 podStartE2EDuration="1m30.734231232s" podCreationTimestamp="2026-01-29 15:30:29 +0000 UTC" firstStartedPulling="2026-01-29 15:31:43.357964315 +0000 UTC m=+1266.647214552" lastFinishedPulling="2026-01-29 15:31:59.006073593 +0000 UTC m=+1282.295323850" observedRunningTime="2026-01-29 15:31:59.718283669 +0000 UTC m=+1283.007533916" watchObservedRunningTime="2026-01-29 15:31:59.734231232 +0000 UTC m=+1283.023481469" Jan 29 15:31:59 crc kubenswrapper[4757]: I0129 15:31:59.930695 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-8ccc8547b-jh2fm" Jan 29 15:32:00 crc kubenswrapper[4757]: I0129 15:32:00.079458 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-2kd46" Jan 29 15:32:00 crc kubenswrapper[4757]: I0129 15:32:00.117390 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-gpkbd" Jan 29 15:32:00 crc kubenswrapper[4757]: I0129 15:32:00.126446 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c7cc6ff45-gpkbd" Jan 29 15:32:00 crc kubenswrapper[4757]: I0129 15:32:00.165089 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-68cb478976-5rfk2" Jan 29 15:32:00 crc kubenswrapper[4757]: I0129 15:32:00.552314 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-2zgqs" Jan 29 15:32:00 crc kubenswrapper[4757]: I0129 15:32:00.554046 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6cf8c44c7-grncr" Jan 29 15:32:00 crc kubenswrapper[4757]: I0129 15:32:00.653325 
4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-dmc9f" Jan 29 15:32:00 crc kubenswrapper[4757]: I0129 15:32:00.688814 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-79f547bdd5-7bg8k" event={"ID":"629b88f8-504a-4e19-914a-7359c131deb2","Type":"ContainerStarted","Data":"5a0fa04d104b8038965bd7986674949a608b7d32c88da0e49d52302101f46a6f"} Jan 29 15:32:00 crc kubenswrapper[4757]: I0129 15:32:00.722844 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-79f547bdd5-7bg8k" podStartSLOduration=3.308805941 podStartE2EDuration="1m31.722821081s" podCreationTimestamp="2026-01-29 15:30:29 +0000 UTC" firstStartedPulling="2026-01-29 15:30:31.909810174 +0000 UTC m=+1195.199060411" lastFinishedPulling="2026-01-29 15:32:00.323825324 +0000 UTC m=+1283.613075551" observedRunningTime="2026-01-29 15:32:00.713801759 +0000 UTC m=+1284.003052006" watchObservedRunningTime="2026-01-29 15:32:00.722821081 +0000 UTC m=+1284.012071318" Jan 29 15:32:00 crc kubenswrapper[4757]: I0129 15:32:00.780873 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-59f4c7d7c4-6z2bh" Jan 29 15:32:02 crc kubenswrapper[4757]: I0129 15:32:02.662863 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5cbc58956b-jn7tc" Jan 29 15:32:05 crc kubenswrapper[4757]: I0129 15:32:05.720317 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-d8b84fbc-qrdfv" event={"ID":"0ae0f41a-2010-4578-a849-a47110a5cad7","Type":"ContainerStarted","Data":"0eb8c6c27e644d5b1e98c843b047fb27e2b2c9b5627d89f2ecb1966573225367"} Jan 29 15:32:05 crc kubenswrapper[4757]: I0129 15:32:05.722442 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-d8b84fbc-qrdfv" Jan 29 15:32:05 crc kubenswrapper[4757]: I0129 15:32:05.751898 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-d8b84fbc-qrdfv" podStartSLOduration=2.9520497199999998 podStartE2EDuration="1m36.751879168s" podCreationTimestamp="2026-01-29 15:30:29 +0000 UTC" firstStartedPulling="2026-01-29 15:30:31.398537084 +0000 UTC m=+1194.687787321" lastFinishedPulling="2026-01-29 15:32:05.198366532 +0000 UTC m=+1288.487616769" observedRunningTime="2026-01-29 15:32:05.751012793 +0000 UTC m=+1289.040263030" watchObservedRunningTime="2026-01-29 15:32:05.751879168 +0000 UTC m=+1289.041129405" Jan 29 15:32:09 crc kubenswrapper[4757]: I0129 15:32:09.715894 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-f8c4db9df-76jqr" Jan 29 15:32:09 crc kubenswrapper[4757]: I0129 15:32:09.876420 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-866c9d5b98-tbvmq" Jan 29 15:32:09 crc kubenswrapper[4757]: I0129 15:32:09.966491 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-79f547bdd5-7bg8k" Jan 29 15:32:09 crc kubenswrapper[4757]: I0129 15:32:09.971605 4757 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-79f547bdd5-7bg8k" Jan 29 15:32:10 crc kubenswrapper[4757]: I0129 15:32:10.167680 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-68cb478976-5rfk2" Jan 29 15:32:10 crc kubenswrapper[4757]: I0129 15:32:10.439919 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-82wnl" Jan 29 15:32:11 crc kubenswrapper[4757]: I0129 15:32:11.624159 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-qzz5n" Jan 29 15:32:11 crc kubenswrapper[4757]: I0129 15:32:11.759384 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-kng6x" event={"ID":"5590a40a-b378-4912-881d-68b46fb6564d","Type":"ContainerStarted","Data":"61d59cbead4357b3497cbfae1228e5203a4f954e4c1635609e1b29039ba97e59"} Jan 29 15:32:11 crc kubenswrapper[4757]: I0129 15:32:11.760147 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-kng6x" Jan 29 15:32:11 crc kubenswrapper[4757]: I0129 15:32:11.761397 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdwks" event={"ID":"2c9cefc6-204f-42c8-b7a6-2c2776617a58","Type":"ContainerStarted","Data":"7f73e0aa73676296fd2ce2697c32535a50f029002f10c0dd8ffc22024492f7ca"} Jan 29 15:32:11 crc kubenswrapper[4757]: I0129 15:32:11.780275 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-kng6x" podStartSLOduration=4.240580342 podStartE2EDuration="1m42.780245657s" podCreationTimestamp="2026-01-29 15:30:29 +0000 UTC" firstStartedPulling="2026-01-29 15:30:32.002486371 +0000 UTC m=+1195.291736608" lastFinishedPulling="2026-01-29 15:32:10.542151686 +0000 UTC m=+1293.831401923" observedRunningTime="2026-01-29 15:32:11.778563459 +0000 UTC m=+1295.067813706" watchObservedRunningTime="2026-01-29 15:32:11.780245657 +0000 UTC m=+1295.069495904" Jan 29 15:32:11 crc kubenswrapper[4757]: I0129 15:32:11.797947 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdwks" podStartSLOduration=2.452424315 podStartE2EDuration="1m41.79792729s" podCreationTimestamp="2026-01-29 15:30:30 +0000 UTC" firstStartedPulling="2026-01-29 15:30:31.996649452 +0000 UTC m=+1195.285899689" lastFinishedPulling="2026-01-29 15:32:11.342152427 +0000 UTC m=+1294.631402664" observedRunningTime="2026-01-29 15:32:11.792170903 +0000 UTC m=+1295.081421140" watchObservedRunningTime="2026-01-29 15:32:11.79792729 +0000 UTC m=+1295.087177527" Jan 29 15:32:12 crc kubenswrapper[4757]: I0129 15:32:12.214250 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj" Jan 29 15:32:17 crc kubenswrapper[4757]: I0129 15:32:17.605725 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Jan 29 15:32:17 crc kubenswrapper[4757]: I0129 15:32:17.606084 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:32:19 crc kubenswrapper[4757]: I0129 15:32:19.754541 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-d8b84fbc-qrdfv" Jan 29 15:32:20 crc kubenswrapper[4757]: I0129 15:32:20.216108 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-68f8cb846c-kng6x" Jan 29 15:32:47 crc kubenswrapper[4757]: I0129 15:32:47.604756 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:32:47 crc kubenswrapper[4757]: I0129 15:32:47.605315 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:33:17 crc kubenswrapper[4757]: I0129 15:33:17.604906 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:33:17 crc kubenswrapper[4757]: I0129 15:33:17.605568 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:33:17 crc kubenswrapper[4757]: I0129 15:33:17.605623 4757 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" Jan 29 15:33:17 crc kubenswrapper[4757]: I0129 15:33:17.606515 4757 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9184ceb222e3aaf913deba2b1b97656dc4b2da7e3588e7cc528958150153f8ad"} pod="openshift-machine-config-operator/machine-config-daemon-45q8t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:33:17 crc kubenswrapper[4757]: I0129 15:33:17.606579 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" containerID="cri-o://9184ceb222e3aaf913deba2b1b97656dc4b2da7e3588e7cc528958150153f8ad" gracePeriod=600 Jan 29 15:33:18 crc kubenswrapper[4757]: I0129 15:33:18.030335 4757 generic.go:334] "Generic (PLEG): container finished" podID="f453676a-fbf0-4159-8a5a-04c0138b42c1" 
containerID="9184ceb222e3aaf913deba2b1b97656dc4b2da7e3588e7cc528958150153f8ad" exitCode=0 Jan 29 15:33:18 crc kubenswrapper[4757]: I0129 15:33:18.030395 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerDied","Data":"9184ceb222e3aaf913deba2b1b97656dc4b2da7e3588e7cc528958150153f8ad"} Jan 29 15:33:18 crc kubenswrapper[4757]: I0129 15:33:18.030502 4757 scope.go:117] "RemoveContainer" containerID="26224c213349170284329c384d8b105e9ad831590acee9b01765c926f542d25f" Jan 29 15:33:19 crc kubenswrapper[4757]: I0129 15:33:19.038959 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerStarted","Data":"ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d"} Jan 29 15:35:47 crc kubenswrapper[4757]: I0129 15:35:47.604880 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:35:47 crc kubenswrapper[4757]: I0129 15:35:47.606187 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:36:17 crc kubenswrapper[4757]: I0129 15:36:17.604727 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:36:17 crc kubenswrapper[4757]: I0129 15:36:17.606554 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:36:47 crc kubenswrapper[4757]: I0129 15:36:47.604765 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:36:47 crc kubenswrapper[4757]: I0129 15:36:47.605293 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:36:47 crc kubenswrapper[4757]: I0129 15:36:47.605353 4757 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" Jan 29 15:36:47 crc kubenswrapper[4757]: I0129 15:36:47.605985 4757 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d"} pod="openshift-machine-config-operator/machine-config-daemon-45q8t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:36:47 crc kubenswrapper[4757]: I0129 15:36:47.606033 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" containerID="cri-o://ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d" gracePeriod=600 Jan 29 15:36:49 crc kubenswrapper[4757]: E0129 15:36:49.307003 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:36:49 crc kubenswrapper[4757]: I0129 15:36:49.636184 4757 generic.go:334] "Generic (PLEG): container finished" podID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerID="ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d" exitCode=0 Jan 29 15:36:49 crc kubenswrapper[4757]: I0129 15:36:49.636231 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerDied","Data":"ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d"} Jan 29 15:36:49 crc kubenswrapper[4757]: I0129 15:36:49.636293 4757 scope.go:117] "RemoveContainer" containerID="9184ceb222e3aaf913deba2b1b97656dc4b2da7e3588e7cc528958150153f8ad" Jan 29 15:36:49 crc kubenswrapper[4757]: I0129 15:36:49.636815 4757 scope.go:117] "RemoveContainer" containerID="ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d" Jan 29 15:36:49 crc kubenswrapper[4757]: E0129 15:36:49.637160 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:37:01 crc kubenswrapper[4757]: I0129 15:37:01.396383 4757 scope.go:117] "RemoveContainer" containerID="ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d" Jan 29 15:37:01 crc kubenswrapper[4757]: E0129 15:37:01.397185 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:37:08 crc kubenswrapper[4757]: I0129 15:37:08.140548 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nhkjh"] Jan 29 15:37:08 crc kubenswrapper[4757]: I0129 15:37:08.142506 4757 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nhkjh" Jan 29 15:37:08 crc kubenswrapper[4757]: I0129 15:37:08.154058 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nhkjh"] Jan 29 15:37:08 crc kubenswrapper[4757]: I0129 15:37:08.288625 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7e61450-a810-43a4-9a25-e0f2837e056a-utilities\") pod \"redhat-operators-nhkjh\" (UID: \"d7e61450-a810-43a4-9a25-e0f2837e056a\") " pod="openshift-marketplace/redhat-operators-nhkjh" Jan 29 15:37:08 crc kubenswrapper[4757]: I0129 15:37:08.288680 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7e61450-a810-43a4-9a25-e0f2837e056a-catalog-content\") pod \"redhat-operators-nhkjh\" (UID: \"d7e61450-a810-43a4-9a25-e0f2837e056a\") " pod="openshift-marketplace/redhat-operators-nhkjh" Jan 29 15:37:08 crc kubenswrapper[4757]: I0129 15:37:08.288723 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftrbq\" (UniqueName: \"kubernetes.io/projected/d7e61450-a810-43a4-9a25-e0f2837e056a-kube-api-access-ftrbq\") pod \"redhat-operators-nhkjh\" (UID: \"d7e61450-a810-43a4-9a25-e0f2837e056a\") " pod="openshift-marketplace/redhat-operators-nhkjh" Jan 29 15:37:08 crc kubenswrapper[4757]: I0129 15:37:08.390251 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7e61450-a810-43a4-9a25-e0f2837e056a-utilities\") pod \"redhat-operators-nhkjh\" (UID: \"d7e61450-a810-43a4-9a25-e0f2837e056a\") " pod="openshift-marketplace/redhat-operators-nhkjh" Jan 29 15:37:08 crc kubenswrapper[4757]: I0129 15:37:08.390677 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7e61450-a810-43a4-9a25-e0f2837e056a-catalog-content\") pod \"redhat-operators-nhkjh\" (UID: \"d7e61450-a810-43a4-9a25-e0f2837e056a\") " pod="openshift-marketplace/redhat-operators-nhkjh" Jan 29 15:37:08 crc kubenswrapper[4757]: I0129 15:37:08.390988 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7e61450-a810-43a4-9a25-e0f2837e056a-catalog-content\") pod \"redhat-operators-nhkjh\" (UID: \"d7e61450-a810-43a4-9a25-e0f2837e056a\") " pod="openshift-marketplace/redhat-operators-nhkjh" Jan 29 15:37:08 crc kubenswrapper[4757]: I0129 15:37:08.391079 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7e61450-a810-43a4-9a25-e0f2837e056a-utilities\") pod \"redhat-operators-nhkjh\" (UID: \"d7e61450-a810-43a4-9a25-e0f2837e056a\") " pod="openshift-marketplace/redhat-operators-nhkjh" Jan 29 15:37:08 crc kubenswrapper[4757]: I0129 15:37:08.391126 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftrbq\" (UniqueName: \"kubernetes.io/projected/d7e61450-a810-43a4-9a25-e0f2837e056a-kube-api-access-ftrbq\") pod \"redhat-operators-nhkjh\" (UID: \"d7e61450-a810-43a4-9a25-e0f2837e056a\") " pod="openshift-marketplace/redhat-operators-nhkjh" Jan 29 15:37:08 crc kubenswrapper[4757]: I0129 15:37:08.416420 4757 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ftrbq\" (UniqueName: \"kubernetes.io/projected/d7e61450-a810-43a4-9a25-e0f2837e056a-kube-api-access-ftrbq\") pod \"redhat-operators-nhkjh\" (UID: \"d7e61450-a810-43a4-9a25-e0f2837e056a\") " pod="openshift-marketplace/redhat-operators-nhkjh" Jan 29 15:37:08 crc kubenswrapper[4757]: I0129 15:37:08.462301 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nhkjh" Jan 29 15:37:08 crc kubenswrapper[4757]: I0129 15:37:08.964641 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nhkjh"] Jan 29 15:37:09 crc kubenswrapper[4757]: I0129 15:37:09.791093 4757 generic.go:334] "Generic (PLEG): container finished" podID="d7e61450-a810-43a4-9a25-e0f2837e056a" containerID="5dcf742f5a36f9ac4ce674872c5ae8b7988572cd6551b24cf79b4c689c9b8783" exitCode=0 Jan 29 15:37:09 crc kubenswrapper[4757]: I0129 15:37:09.791186 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhkjh" event={"ID":"d7e61450-a810-43a4-9a25-e0f2837e056a","Type":"ContainerDied","Data":"5dcf742f5a36f9ac4ce674872c5ae8b7988572cd6551b24cf79b4c689c9b8783"} Jan 29 15:37:09 crc kubenswrapper[4757]: I0129 15:37:09.791449 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhkjh" event={"ID":"d7e61450-a810-43a4-9a25-e0f2837e056a","Type":"ContainerStarted","Data":"01053b4d82f0bdbbd122ce67d2ada1ce75d97f037cf76be6cd8407a2a79b82c5"} Jan 29 15:37:09 crc kubenswrapper[4757]: I0129 15:37:09.793195 4757 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 15:37:10 crc kubenswrapper[4757]: I0129 15:37:10.805713 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhkjh" event={"ID":"d7e61450-a810-43a4-9a25-e0f2837e056a","Type":"ContainerStarted","Data":"d3a3f9940e59a84366a38dbfaf85d8e321625afe22020de38736c7bba7854639"} Jan 29 15:37:15 crc kubenswrapper[4757]: I0129 15:37:15.397370 4757 scope.go:117] "RemoveContainer" containerID="ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d" Jan 29 15:37:15 crc kubenswrapper[4757]: E0129 15:37:15.398054 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:37:16 crc kubenswrapper[4757]: I0129 15:37:16.854783 4757 generic.go:334] "Generic (PLEG): container finished" podID="d7e61450-a810-43a4-9a25-e0f2837e056a" containerID="d3a3f9940e59a84366a38dbfaf85d8e321625afe22020de38736c7bba7854639" exitCode=0 Jan 29 15:37:16 crc kubenswrapper[4757]: I0129 15:37:16.854804 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhkjh" event={"ID":"d7e61450-a810-43a4-9a25-e0f2837e056a","Type":"ContainerDied","Data":"d3a3f9940e59a84366a38dbfaf85d8e321625afe22020de38736c7bba7854639"} Jan 29 15:37:17 crc kubenswrapper[4757]: I0129 15:37:17.863962 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhkjh" 
event={"ID":"d7e61450-a810-43a4-9a25-e0f2837e056a","Type":"ContainerStarted","Data":"f7bf691ca4f60ddd3248684e1a37d215171abfe957de30fc55f469aa49be857f"} Jan 29 15:37:17 crc kubenswrapper[4757]: I0129 15:37:17.882656 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nhkjh" podStartSLOduration=2.341154212 podStartE2EDuration="9.882639236s" podCreationTimestamp="2026-01-29 15:37:08 +0000 UTC" firstStartedPulling="2026-01-29 15:37:09.792957942 +0000 UTC m=+1593.082208189" lastFinishedPulling="2026-01-29 15:37:17.334442976 +0000 UTC m=+1600.623693213" observedRunningTime="2026-01-29 15:37:17.880911626 +0000 UTC m=+1601.170161863" watchObservedRunningTime="2026-01-29 15:37:17.882639236 +0000 UTC m=+1601.171889483" Jan 29 15:37:18 crc kubenswrapper[4757]: I0129 15:37:18.463816 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nhkjh" Jan 29 15:37:18 crc kubenswrapper[4757]: I0129 15:37:18.463878 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nhkjh" Jan 29 15:37:19 crc kubenswrapper[4757]: I0129 15:37:19.507453 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nhkjh" podUID="d7e61450-a810-43a4-9a25-e0f2837e056a" containerName="registry-server" probeResult="failure" output=< Jan 29 15:37:19 crc kubenswrapper[4757]: timeout: failed to connect service ":50051" within 1s Jan 29 15:37:19 crc kubenswrapper[4757]: > Jan 29 15:37:28 crc kubenswrapper[4757]: I0129 15:37:28.519724 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nhkjh" Jan 29 15:37:28 crc kubenswrapper[4757]: I0129 15:37:28.585713 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nhkjh" Jan 29 15:37:28 crc kubenswrapper[4757]: I0129 15:37:28.768211 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nhkjh"] Jan 29 15:37:29 crc kubenswrapper[4757]: I0129 15:37:29.397070 4757 scope.go:117] "RemoveContainer" containerID="ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d" Jan 29 15:37:29 crc kubenswrapper[4757]: E0129 15:37:29.397411 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:37:29 crc kubenswrapper[4757]: I0129 15:37:29.946847 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nhkjh" podUID="d7e61450-a810-43a4-9a25-e0f2837e056a" containerName="registry-server" containerID="cri-o://f7bf691ca4f60ddd3248684e1a37d215171abfe957de30fc55f469aa49be857f" gracePeriod=2 Jan 29 15:37:30 crc kubenswrapper[4757]: I0129 15:37:30.314678 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nhkjh" Jan 29 15:37:30 crc kubenswrapper[4757]: I0129 15:37:30.403345 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7e61450-a810-43a4-9a25-e0f2837e056a-utilities\") pod \"d7e61450-a810-43a4-9a25-e0f2837e056a\" (UID: \"d7e61450-a810-43a4-9a25-e0f2837e056a\") " Jan 29 15:37:30 crc kubenswrapper[4757]: I0129 15:37:30.403394 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftrbq\" (UniqueName: \"kubernetes.io/projected/d7e61450-a810-43a4-9a25-e0f2837e056a-kube-api-access-ftrbq\") pod \"d7e61450-a810-43a4-9a25-e0f2837e056a\" (UID: \"d7e61450-a810-43a4-9a25-e0f2837e056a\") " Jan 29 15:37:30 crc kubenswrapper[4757]: I0129 15:37:30.403414 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7e61450-a810-43a4-9a25-e0f2837e056a-catalog-content\") pod \"d7e61450-a810-43a4-9a25-e0f2837e056a\" (UID: \"d7e61450-a810-43a4-9a25-e0f2837e056a\") " Jan 29 15:37:30 crc kubenswrapper[4757]: I0129 15:37:30.404527 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7e61450-a810-43a4-9a25-e0f2837e056a-utilities" (OuterVolumeSpecName: "utilities") pod "d7e61450-a810-43a4-9a25-e0f2837e056a" (UID: "d7e61450-a810-43a4-9a25-e0f2837e056a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:37:30 crc kubenswrapper[4757]: I0129 15:37:30.408590 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e61450-a810-43a4-9a25-e0f2837e056a-kube-api-access-ftrbq" (OuterVolumeSpecName: "kube-api-access-ftrbq") pod "d7e61450-a810-43a4-9a25-e0f2837e056a" (UID: "d7e61450-a810-43a4-9a25-e0f2837e056a"). InnerVolumeSpecName "kube-api-access-ftrbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:37:30 crc kubenswrapper[4757]: I0129 15:37:30.505564 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftrbq\" (UniqueName: \"kubernetes.io/projected/d7e61450-a810-43a4-9a25-e0f2837e056a-kube-api-access-ftrbq\") on node \"crc\" DevicePath \"\"" Jan 29 15:37:30 crc kubenswrapper[4757]: I0129 15:37:30.505624 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7e61450-a810-43a4-9a25-e0f2837e056a-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:37:30 crc kubenswrapper[4757]: I0129 15:37:30.524672 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7e61450-a810-43a4-9a25-e0f2837e056a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7e61450-a810-43a4-9a25-e0f2837e056a" (UID: "d7e61450-a810-43a4-9a25-e0f2837e056a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:37:30 crc kubenswrapper[4757]: I0129 15:37:30.607335 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7e61450-a810-43a4-9a25-e0f2837e056a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:37:30 crc kubenswrapper[4757]: I0129 15:37:30.955030 4757 generic.go:334] "Generic (PLEG): container finished" podID="d7e61450-a810-43a4-9a25-e0f2837e056a" containerID="f7bf691ca4f60ddd3248684e1a37d215171abfe957de30fc55f469aa49be857f" exitCode=0 Jan 29 15:37:30 crc kubenswrapper[4757]: I0129 15:37:30.955093 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhkjh" event={"ID":"d7e61450-a810-43a4-9a25-e0f2837e056a","Type":"ContainerDied","Data":"f7bf691ca4f60ddd3248684e1a37d215171abfe957de30fc55f469aa49be857f"} Jan 29 15:37:30 crc kubenswrapper[4757]: I0129 15:37:30.955122 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhkjh" event={"ID":"d7e61450-a810-43a4-9a25-e0f2837e056a","Type":"ContainerDied","Data":"01053b4d82f0bdbbd122ce67d2ada1ce75d97f037cf76be6cd8407a2a79b82c5"} Jan 29 15:37:30 crc kubenswrapper[4757]: I0129 15:37:30.955141 4757 scope.go:117] "RemoveContainer" containerID="f7bf691ca4f60ddd3248684e1a37d215171abfe957de30fc55f469aa49be857f" Jan 29 15:37:30 crc kubenswrapper[4757]: I0129 15:37:30.955322 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nhkjh" Jan 29 15:37:30 crc kubenswrapper[4757]: I0129 15:37:30.973080 4757 scope.go:117] "RemoveContainer" containerID="d3a3f9940e59a84366a38dbfaf85d8e321625afe22020de38736c7bba7854639" Jan 29 15:37:31 crc kubenswrapper[4757]: I0129 15:37:31.001343 4757 scope.go:117] "RemoveContainer" containerID="5dcf742f5a36f9ac4ce674872c5ae8b7988572cd6551b24cf79b4c689c9b8783" Jan 29 15:37:31 crc kubenswrapper[4757]: I0129 15:37:31.018317 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nhkjh"] Jan 29 15:37:31 crc kubenswrapper[4757]: I0129 15:37:31.031649 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nhkjh"] Jan 29 15:37:31 crc kubenswrapper[4757]: I0129 15:37:31.050903 4757 scope.go:117] "RemoveContainer" containerID="f7bf691ca4f60ddd3248684e1a37d215171abfe957de30fc55f469aa49be857f" Jan 29 15:37:31 crc kubenswrapper[4757]: E0129 15:37:31.052553 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7bf691ca4f60ddd3248684e1a37d215171abfe957de30fc55f469aa49be857f\": container with ID starting with f7bf691ca4f60ddd3248684e1a37d215171abfe957de30fc55f469aa49be857f not found: ID does not exist" containerID="f7bf691ca4f60ddd3248684e1a37d215171abfe957de30fc55f469aa49be857f" Jan 29 15:37:31 crc kubenswrapper[4757]: I0129 15:37:31.052591 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7bf691ca4f60ddd3248684e1a37d215171abfe957de30fc55f469aa49be857f"} err="failed to get container status \"f7bf691ca4f60ddd3248684e1a37d215171abfe957de30fc55f469aa49be857f\": rpc error: code = NotFound desc = could not find container \"f7bf691ca4f60ddd3248684e1a37d215171abfe957de30fc55f469aa49be857f\": container with ID starting with f7bf691ca4f60ddd3248684e1a37d215171abfe957de30fc55f469aa49be857f not found: ID does not exist" Jan 29 15:37:31 crc 
kubenswrapper[4757]: I0129 15:37:31.052616 4757 scope.go:117] "RemoveContainer" containerID="d3a3f9940e59a84366a38dbfaf85d8e321625afe22020de38736c7bba7854639" Jan 29 15:37:31 crc kubenswrapper[4757]: E0129 15:37:31.054650 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3a3f9940e59a84366a38dbfaf85d8e321625afe22020de38736c7bba7854639\": container with ID starting with d3a3f9940e59a84366a38dbfaf85d8e321625afe22020de38736c7bba7854639 not found: ID does not exist" containerID="d3a3f9940e59a84366a38dbfaf85d8e321625afe22020de38736c7bba7854639" Jan 29 15:37:31 crc kubenswrapper[4757]: I0129 15:37:31.054690 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3a3f9940e59a84366a38dbfaf85d8e321625afe22020de38736c7bba7854639"} err="failed to get container status \"d3a3f9940e59a84366a38dbfaf85d8e321625afe22020de38736c7bba7854639\": rpc error: code = NotFound desc = could not find container \"d3a3f9940e59a84366a38dbfaf85d8e321625afe22020de38736c7bba7854639\": container with ID starting with d3a3f9940e59a84366a38dbfaf85d8e321625afe22020de38736c7bba7854639 not found: ID does not exist" Jan 29 15:37:31 crc kubenswrapper[4757]: I0129 15:37:31.054717 4757 scope.go:117] "RemoveContainer" containerID="5dcf742f5a36f9ac4ce674872c5ae8b7988572cd6551b24cf79b4c689c9b8783" Jan 29 15:37:31 crc kubenswrapper[4757]: E0129 15:37:31.058411 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dcf742f5a36f9ac4ce674872c5ae8b7988572cd6551b24cf79b4c689c9b8783\": container with ID starting with 5dcf742f5a36f9ac4ce674872c5ae8b7988572cd6551b24cf79b4c689c9b8783 not found: ID does not exist" containerID="5dcf742f5a36f9ac4ce674872c5ae8b7988572cd6551b24cf79b4c689c9b8783" Jan 29 15:37:31 crc kubenswrapper[4757]: I0129 15:37:31.058460 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dcf742f5a36f9ac4ce674872c5ae8b7988572cd6551b24cf79b4c689c9b8783"} err="failed to get container status \"5dcf742f5a36f9ac4ce674872c5ae8b7988572cd6551b24cf79b4c689c9b8783\": rpc error: code = NotFound desc = could not find container \"5dcf742f5a36f9ac4ce674872c5ae8b7988572cd6551b24cf79b4c689c9b8783\": container with ID starting with 5dcf742f5a36f9ac4ce674872c5ae8b7988572cd6551b24cf79b4c689c9b8783 not found: ID does not exist" Jan 29 15:37:31 crc kubenswrapper[4757]: I0129 15:37:31.404257 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e61450-a810-43a4-9a25-e0f2837e056a" path="/var/lib/kubelet/pods/d7e61450-a810-43a4-9a25-e0f2837e056a/volumes" Jan 29 15:37:44 crc kubenswrapper[4757]: I0129 15:37:44.396304 4757 scope.go:117] "RemoveContainer" containerID="ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d" Jan 29 15:37:44 crc kubenswrapper[4757]: E0129 15:37:44.396997 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:37:55 crc kubenswrapper[4757]: I0129 15:37:55.395923 4757 scope.go:117] "RemoveContainer" containerID="ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d" 
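
[Editor's note on the pod_startup_latency_tracker entries above: each carries four machine-readable timestamps. podStartE2EDuration is simply watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling) — which is why the slow pulls in this log inflate the E2E figure to ~1m24s while the SLO figure stays under 7s. A minimal Go sketch reproducing the arithmetic for the placement-operator entry; the timestamps are copied from the log, everything else is illustrative and not kubelet API:]

```go
package main

import (
	"fmt"
	"time"
)

// Timestamps in these entries are Go's default time.Time formatting
// (with the trailing monotonic-clock "m=+..." suffix stripped), so they
// parse back with the matching reference layout.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Copied from the placement-operator-controller-manager-5b964cf4cd-2zgqs entry.
	created := mustParse("2026-01-29 15:30:29 +0000 UTC")            // podCreationTimestamp
	firstPull := mustParse("2026-01-29 15:30:31.91827582 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2026-01-29 15:31:49.777511733 +0000 UTC") // lastFinishedPulling
	running := mustParse("2026-01-29 15:31:53.638055828 +0000 UTC")  // watchObservedRunningTime

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // E2E minus the image-pull window

	fmt.Println(e2e) // 1m24.638055828s — matches podStartE2EDuration
	fmt.Println(slo) // 6.778819915s    — matches podStartSLOduration
}
```

[End of note; the log resumes below.]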
Jan 29 15:37:55 crc kubenswrapper[4757]: E0129 15:37:55.396605 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:38:06 crc kubenswrapper[4757]: I0129 15:38:06.397470 4757 scope.go:117] "RemoveContainer" containerID="ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d" Jan 29 15:38:06 crc kubenswrapper[4757]: E0129 15:38:06.398408 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:38:20 crc kubenswrapper[4757]: I0129 15:38:20.396816 4757 scope.go:117] "RemoveContainer" containerID="ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d" Jan 29 15:38:20 crc kubenswrapper[4757]: E0129 15:38:20.397546 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:38:31 crc kubenswrapper[4757]: I0129 15:38:31.397082 4757 scope.go:117] "RemoveContainer" containerID="ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d" Jan 29 15:38:31 crc kubenswrapper[4757]: E0129 15:38:31.398177 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:38:42 crc kubenswrapper[4757]: I0129 15:38:42.396952 4757 scope.go:117] "RemoveContainer" containerID="ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d" Jan 29 15:38:42 crc kubenswrapper[4757]: E0129 15:38:42.397846 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:38:48 crc kubenswrapper[4757]: I0129 15:38:48.196663 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gnq6g"] Jan 29 15:38:48 crc kubenswrapper[4757]: E0129 15:38:48.197299 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e61450-a810-43a4-9a25-e0f2837e056a" 
containerName="extract-utilities" Jan 29 15:38:48 crc kubenswrapper[4757]: I0129 15:38:48.197315 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e61450-a810-43a4-9a25-e0f2837e056a" containerName="extract-utilities" Jan 29 15:38:48 crc kubenswrapper[4757]: E0129 15:38:48.197335 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e61450-a810-43a4-9a25-e0f2837e056a" containerName="registry-server" Jan 29 15:38:48 crc kubenswrapper[4757]: I0129 15:38:48.197344 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e61450-a810-43a4-9a25-e0f2837e056a" containerName="registry-server" Jan 29 15:38:48 crc kubenswrapper[4757]: E0129 15:38:48.197364 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e61450-a810-43a4-9a25-e0f2837e056a" containerName="extract-content" Jan 29 15:38:48 crc kubenswrapper[4757]: I0129 15:38:48.197372 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e61450-a810-43a4-9a25-e0f2837e056a" containerName="extract-content" Jan 29 15:38:48 crc kubenswrapper[4757]: I0129 15:38:48.197532 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7e61450-a810-43a4-9a25-e0f2837e056a" containerName="registry-server" Jan 29 15:38:48 crc kubenswrapper[4757]: I0129 15:38:48.198684 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gnq6g" Jan 29 15:38:48 crc kubenswrapper[4757]: I0129 15:38:48.207234 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gnq6g"] Jan 29 15:38:48 crc kubenswrapper[4757]: I0129 15:38:48.384527 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqwsd\" (UniqueName: \"kubernetes.io/projected/446f5146-8fc3-4552-b5c3-61b1f25ff0b9-kube-api-access-nqwsd\") pod \"certified-operators-gnq6g\" (UID: \"446f5146-8fc3-4552-b5c3-61b1f25ff0b9\") " pod="openshift-marketplace/certified-operators-gnq6g" Jan 29 15:38:48 crc kubenswrapper[4757]: I0129 15:38:48.384586 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/446f5146-8fc3-4552-b5c3-61b1f25ff0b9-catalog-content\") pod \"certified-operators-gnq6g\" (UID: \"446f5146-8fc3-4552-b5c3-61b1f25ff0b9\") " pod="openshift-marketplace/certified-operators-gnq6g" Jan 29 15:38:48 crc kubenswrapper[4757]: I0129 15:38:48.384711 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/446f5146-8fc3-4552-b5c3-61b1f25ff0b9-utilities\") pod \"certified-operators-gnq6g\" (UID: \"446f5146-8fc3-4552-b5c3-61b1f25ff0b9\") " pod="openshift-marketplace/certified-operators-gnq6g" Jan 29 15:38:48 crc kubenswrapper[4757]: I0129 15:38:48.486287 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/446f5146-8fc3-4552-b5c3-61b1f25ff0b9-utilities\") pod \"certified-operators-gnq6g\" (UID: \"446f5146-8fc3-4552-b5c3-61b1f25ff0b9\") " pod="openshift-marketplace/certified-operators-gnq6g" Jan 29 15:38:48 crc kubenswrapper[4757]: I0129 15:38:48.486351 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqwsd\" (UniqueName: \"kubernetes.io/projected/446f5146-8fc3-4552-b5c3-61b1f25ff0b9-kube-api-access-nqwsd\") pod \"certified-operators-gnq6g\" (UID: 
\"446f5146-8fc3-4552-b5c3-61b1f25ff0b9\") " pod="openshift-marketplace/certified-operators-gnq6g" Jan 29 15:38:48 crc kubenswrapper[4757]: I0129 15:38:48.486374 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/446f5146-8fc3-4552-b5c3-61b1f25ff0b9-catalog-content\") pod \"certified-operators-gnq6g\" (UID: \"446f5146-8fc3-4552-b5c3-61b1f25ff0b9\") " pod="openshift-marketplace/certified-operators-gnq6g" Jan 29 15:38:48 crc kubenswrapper[4757]: I0129 15:38:48.487329 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/446f5146-8fc3-4552-b5c3-61b1f25ff0b9-catalog-content\") pod \"certified-operators-gnq6g\" (UID: \"446f5146-8fc3-4552-b5c3-61b1f25ff0b9\") " pod="openshift-marketplace/certified-operators-gnq6g" Jan 29 15:38:48 crc kubenswrapper[4757]: I0129 15:38:48.487362 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/446f5146-8fc3-4552-b5c3-61b1f25ff0b9-utilities\") pod \"certified-operators-gnq6g\" (UID: \"446f5146-8fc3-4552-b5c3-61b1f25ff0b9\") " pod="openshift-marketplace/certified-operators-gnq6g" Jan 29 15:38:48 crc kubenswrapper[4757]: I0129 15:38:48.510111 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqwsd\" (UniqueName: \"kubernetes.io/projected/446f5146-8fc3-4552-b5c3-61b1f25ff0b9-kube-api-access-nqwsd\") pod \"certified-operators-gnq6g\" (UID: \"446f5146-8fc3-4552-b5c3-61b1f25ff0b9\") " pod="openshift-marketplace/certified-operators-gnq6g" Jan 29 15:38:48 crc kubenswrapper[4757]: I0129 15:38:48.523802 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gnq6g" Jan 29 15:38:48 crc kubenswrapper[4757]: I0129 15:38:48.995162 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gnq6g"] Jan 29 15:38:49 crc kubenswrapper[4757]: I0129 15:38:49.494100 4757 generic.go:334] "Generic (PLEG): container finished" podID="446f5146-8fc3-4552-b5c3-61b1f25ff0b9" containerID="05302dbef259b9f2b8ff0807119d291564d4ea98f10a40b17fad0f34aee46550" exitCode=0 Jan 29 15:38:49 crc kubenswrapper[4757]: I0129 15:38:49.494176 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gnq6g" event={"ID":"446f5146-8fc3-4552-b5c3-61b1f25ff0b9","Type":"ContainerDied","Data":"05302dbef259b9f2b8ff0807119d291564d4ea98f10a40b17fad0f34aee46550"} Jan 29 15:38:49 crc kubenswrapper[4757]: I0129 15:38:49.494436 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gnq6g" event={"ID":"446f5146-8fc3-4552-b5c3-61b1f25ff0b9","Type":"ContainerStarted","Data":"b35c33922ef66b045c9937353e3c4fa48de6f84bfe9fc32fac287c9aa3fe54a0"} Jan 29 15:38:50 crc kubenswrapper[4757]: I0129 15:38:50.501808 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gnq6g" event={"ID":"446f5146-8fc3-4552-b5c3-61b1f25ff0b9","Type":"ContainerStarted","Data":"5813febe790f1b9909faff2371c9c0e342107c1f477b8304a250beb4a52196f9"} Jan 29 15:38:51 crc kubenswrapper[4757]: I0129 15:38:51.509179 4757 generic.go:334] "Generic (PLEG): container finished" podID="446f5146-8fc3-4552-b5c3-61b1f25ff0b9" containerID="5813febe790f1b9909faff2371c9c0e342107c1f477b8304a250beb4a52196f9" exitCode=0 Jan 29 15:38:51 crc kubenswrapper[4757]: 
I0129 15:38:51.509224 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gnq6g" event={"ID":"446f5146-8fc3-4552-b5c3-61b1f25ff0b9","Type":"ContainerDied","Data":"5813febe790f1b9909faff2371c9c0e342107c1f477b8304a250beb4a52196f9"}
Jan 29 15:38:52 crc kubenswrapper[4757]: I0129 15:38:52.516440 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gnq6g" event={"ID":"446f5146-8fc3-4552-b5c3-61b1f25ff0b9","Type":"ContainerStarted","Data":"4797b32630b9045f052d81467229db83349ef66faf83a020501b0b59ded85d74"}
Jan 29 15:38:52 crc kubenswrapper[4757]: I0129 15:38:52.548849 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gnq6g" podStartSLOduration=2.081381693 podStartE2EDuration="4.548811505s" podCreationTimestamp="2026-01-29 15:38:48 +0000 UTC" firstStartedPulling="2026-01-29 15:38:49.496456876 +0000 UTC m=+1692.785707113" lastFinishedPulling="2026-01-29 15:38:51.963886698 +0000 UTC m=+1695.253136925" observedRunningTime="2026-01-29 15:38:52.538907995 +0000 UTC m=+1695.828158242" watchObservedRunningTime="2026-01-29 15:38:52.548811505 +0000 UTC m=+1695.838061752"
Jan 29 15:38:54 crc kubenswrapper[4757]: I0129 15:38:54.396263 4757 scope.go:117] "RemoveContainer" containerID="ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d"
Jan 29 15:38:54 crc kubenswrapper[4757]: E0129 15:38:54.396803 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1"
Jan 29 15:38:58 crc kubenswrapper[4757]: I0129 15:38:58.524452 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gnq6g"
Jan 29 15:38:58 crc kubenswrapper[4757]: I0129 15:38:58.525158 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gnq6g"
Jan 29 15:38:58 crc kubenswrapper[4757]: I0129 15:38:58.576733 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gnq6g"
Jan 29 15:38:59 crc kubenswrapper[4757]: I0129 15:38:59.617042 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gnq6g"
Jan 29 15:38:59 crc kubenswrapper[4757]: I0129 15:38:59.673979 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gnq6g"]
Jan 29 15:39:01 crc kubenswrapper[4757]: I0129 15:39:01.586843 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gnq6g" podUID="446f5146-8fc3-4552-b5c3-61b1f25ff0b9" containerName="registry-server" containerID="cri-o://4797b32630b9045f052d81467229db83349ef66faf83a020501b0b59ded85d74" gracePeriod=2
Jan 29 15:39:02 crc kubenswrapper[4757]: I0129 15:39:02.593015 4757 generic.go:334] "Generic (PLEG): container finished" podID="446f5146-8fc3-4552-b5c3-61b1f25ff0b9" containerID="4797b32630b9045f052d81467229db83349ef66faf83a020501b0b59ded85d74" exitCode=0
Jan 29 15:39:02 crc kubenswrapper[4757]: I0129 15:39:02.593319 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gnq6g" event={"ID":"446f5146-8fc3-4552-b5c3-61b1f25ff0b9","Type":"ContainerDied","Data":"4797b32630b9045f052d81467229db83349ef66faf83a020501b0b59ded85d74"}
Jan 29 15:39:02 crc kubenswrapper[4757]: I0129 15:39:02.653675 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gnq6g"
Jan 29 15:39:02 crc kubenswrapper[4757]: I0129 15:39:02.802908 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqwsd\" (UniqueName: \"kubernetes.io/projected/446f5146-8fc3-4552-b5c3-61b1f25ff0b9-kube-api-access-nqwsd\") pod \"446f5146-8fc3-4552-b5c3-61b1f25ff0b9\" (UID: \"446f5146-8fc3-4552-b5c3-61b1f25ff0b9\") "
Jan 29 15:39:02 crc kubenswrapper[4757]: I0129 15:39:02.802997 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/446f5146-8fc3-4552-b5c3-61b1f25ff0b9-utilities\") pod \"446f5146-8fc3-4552-b5c3-61b1f25ff0b9\" (UID: \"446f5146-8fc3-4552-b5c3-61b1f25ff0b9\") "
Jan 29 15:39:02 crc kubenswrapper[4757]: I0129 15:39:02.803101 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/446f5146-8fc3-4552-b5c3-61b1f25ff0b9-catalog-content\") pod \"446f5146-8fc3-4552-b5c3-61b1f25ff0b9\" (UID: \"446f5146-8fc3-4552-b5c3-61b1f25ff0b9\") "
Jan 29 15:39:02 crc kubenswrapper[4757]: I0129 15:39:02.804114 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/446f5146-8fc3-4552-b5c3-61b1f25ff0b9-utilities" (OuterVolumeSpecName: "utilities") pod "446f5146-8fc3-4552-b5c3-61b1f25ff0b9" (UID: "446f5146-8fc3-4552-b5c3-61b1f25ff0b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:39:02 crc kubenswrapper[4757]: I0129 15:39:02.812839 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/446f5146-8fc3-4552-b5c3-61b1f25ff0b9-kube-api-access-nqwsd" (OuterVolumeSpecName: "kube-api-access-nqwsd") pod "446f5146-8fc3-4552-b5c3-61b1f25ff0b9" (UID: "446f5146-8fc3-4552-b5c3-61b1f25ff0b9"). InnerVolumeSpecName "kube-api-access-nqwsd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:39:02 crc kubenswrapper[4757]: I0129 15:39:02.863910 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/446f5146-8fc3-4552-b5c3-61b1f25ff0b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "446f5146-8fc3-4552-b5c3-61b1f25ff0b9" (UID: "446f5146-8fc3-4552-b5c3-61b1f25ff0b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:39:02 crc kubenswrapper[4757]: I0129 15:39:02.904336 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/446f5146-8fc3-4552-b5c3-61b1f25ff0b9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 15:39:02 crc kubenswrapper[4757]: I0129 15:39:02.904379 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqwsd\" (UniqueName: \"kubernetes.io/projected/446f5146-8fc3-4552-b5c3-61b1f25ff0b9-kube-api-access-nqwsd\") on node \"crc\" DevicePath \"\""
Jan 29 15:39:02 crc kubenswrapper[4757]: I0129 15:39:02.904394 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/446f5146-8fc3-4552-b5c3-61b1f25ff0b9-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 15:39:03 crc kubenswrapper[4757]: I0129 15:39:03.602447 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gnq6g" event={"ID":"446f5146-8fc3-4552-b5c3-61b1f25ff0b9","Type":"ContainerDied","Data":"b35c33922ef66b045c9937353e3c4fa48de6f84bfe9fc32fac287c9aa3fe54a0"}
Jan 29 15:39:03 crc kubenswrapper[4757]: I0129 15:39:03.602505 4757 scope.go:117] "RemoveContainer" containerID="4797b32630b9045f052d81467229db83349ef66faf83a020501b0b59ded85d74"
Jan 29 15:39:03 crc kubenswrapper[4757]: I0129 15:39:03.602625 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gnq6g"
Jan 29 15:39:03 crc kubenswrapper[4757]: I0129 15:39:03.627131 4757 scope.go:117] "RemoveContainer" containerID="5813febe790f1b9909faff2371c9c0e342107c1f477b8304a250beb4a52196f9"
Jan 29 15:39:03 crc kubenswrapper[4757]: I0129 15:39:03.636545 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gnq6g"]
Jan 29 15:39:03 crc kubenswrapper[4757]: I0129 15:39:03.653915 4757 scope.go:117] "RemoveContainer" containerID="05302dbef259b9f2b8ff0807119d291564d4ea98f10a40b17fad0f34aee46550"
Jan 29 15:39:03 crc kubenswrapper[4757]: I0129 15:39:03.663659 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gnq6g"]
Jan 29 15:39:05 crc kubenswrapper[4757]: I0129 15:39:05.410781 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="446f5146-8fc3-4552-b5c3-61b1f25ff0b9" path="/var/lib/kubelet/pods/446f5146-8fc3-4552-b5c3-61b1f25ff0b9/volumes"
Jan 29 15:39:06 crc kubenswrapper[4757]: I0129 15:39:06.396732 4757 scope.go:117] "RemoveContainer" containerID="ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d"
Jan 29 15:39:06 crc kubenswrapper[4757]: E0129 15:39:06.397039 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1"
Jan 29 15:39:18 crc kubenswrapper[4757]: I0129 15:39:18.396237 4757 scope.go:117] "RemoveContainer" containerID="ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d"
Jan 29 15:39:18 crc kubenswrapper[4757]: E0129 15:39:18.398391 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1"
Jan 29 15:39:29 crc kubenswrapper[4757]: I0129 15:39:29.397020 4757 scope.go:117] "RemoveContainer" containerID="ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d"
Jan 29 15:39:29 crc kubenswrapper[4757]: E0129 15:39:29.397916 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1"
Jan 29 15:39:29 crc kubenswrapper[4757]: I0129 15:39:29.470711 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qb6d8"]
Jan 29 15:39:29 crc kubenswrapper[4757]: E0129 15:39:29.471032 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="446f5146-8fc3-4552-b5c3-61b1f25ff0b9" containerName="extract-utilities"
Jan 29 15:39:29 crc kubenswrapper[4757]: I0129 15:39:29.471052 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="446f5146-8fc3-4552-b5c3-61b1f25ff0b9" containerName="extract-utilities"
Jan 29 15:39:29 crc kubenswrapper[4757]: E0129 15:39:29.471077 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="446f5146-8fc3-4552-b5c3-61b1f25ff0b9" containerName="extract-content"
Jan 29 15:39:29 crc kubenswrapper[4757]: I0129 15:39:29.471088 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="446f5146-8fc3-4552-b5c3-61b1f25ff0b9" containerName="extract-content"
Jan 29 15:39:29 crc kubenswrapper[4757]: E0129 15:39:29.471103 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="446f5146-8fc3-4552-b5c3-61b1f25ff0b9" containerName="registry-server"
Jan 29 15:39:29 crc kubenswrapper[4757]: I0129 15:39:29.471111 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="446f5146-8fc3-4552-b5c3-61b1f25ff0b9" containerName="registry-server"
Jan 29 15:39:29 crc kubenswrapper[4757]: I0129 15:39:29.471333 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="446f5146-8fc3-4552-b5c3-61b1f25ff0b9" containerName="registry-server"
Jan 29 15:39:29 crc kubenswrapper[4757]: I0129 15:39:29.472433 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qb6d8"
Jan 29 15:39:29 crc kubenswrapper[4757]: I0129 15:39:29.485726 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qb6d8"]
Jan 29 15:39:29 crc kubenswrapper[4757]: I0129 15:39:29.576590 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/290df532-7425-40e4-9b41-9c02555f63ae-catalog-content\") pod \"redhat-marketplace-qb6d8\" (UID: \"290df532-7425-40e4-9b41-9c02555f63ae\") " pod="openshift-marketplace/redhat-marketplace-qb6d8"
Jan 29 15:39:29 crc kubenswrapper[4757]: I0129 15:39:29.576654 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/290df532-7425-40e4-9b41-9c02555f63ae-utilities\") pod \"redhat-marketplace-qb6d8\" (UID: \"290df532-7425-40e4-9b41-9c02555f63ae\") " pod="openshift-marketplace/redhat-marketplace-qb6d8"
Jan 29 15:39:29 crc kubenswrapper[4757]: I0129 15:39:29.577348 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv9c2\" (UniqueName: \"kubernetes.io/projected/290df532-7425-40e4-9b41-9c02555f63ae-kube-api-access-qv9c2\") pod \"redhat-marketplace-qb6d8\" (UID: \"290df532-7425-40e4-9b41-9c02555f63ae\") " pod="openshift-marketplace/redhat-marketplace-qb6d8"
Jan 29 15:39:29 crc kubenswrapper[4757]: I0129 15:39:29.678971 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv9c2\" (UniqueName: \"kubernetes.io/projected/290df532-7425-40e4-9b41-9c02555f63ae-kube-api-access-qv9c2\") pod \"redhat-marketplace-qb6d8\" (UID: \"290df532-7425-40e4-9b41-9c02555f63ae\") " pod="openshift-marketplace/redhat-marketplace-qb6d8"
Jan 29 15:39:29 crc kubenswrapper[4757]: I0129 15:39:29.679019 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/290df532-7425-40e4-9b41-9c02555f63ae-catalog-content\") pod \"redhat-marketplace-qb6d8\" (UID: \"290df532-7425-40e4-9b41-9c02555f63ae\") " pod="openshift-marketplace/redhat-marketplace-qb6d8"
Jan 29 15:39:29 crc kubenswrapper[4757]: I0129 15:39:29.679044 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/290df532-7425-40e4-9b41-9c02555f63ae-utilities\") pod \"redhat-marketplace-qb6d8\" (UID: \"290df532-7425-40e4-9b41-9c02555f63ae\") " pod="openshift-marketplace/redhat-marketplace-qb6d8"
Jan 29 15:39:29 crc kubenswrapper[4757]: I0129 15:39:29.679459 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/290df532-7425-40e4-9b41-9c02555f63ae-utilities\") pod \"redhat-marketplace-qb6d8\" (UID: \"290df532-7425-40e4-9b41-9c02555f63ae\") " pod="openshift-marketplace/redhat-marketplace-qb6d8"
Jan 29 15:39:29 crc kubenswrapper[4757]: I0129 15:39:29.679496 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/290df532-7425-40e4-9b41-9c02555f63ae-catalog-content\") pod \"redhat-marketplace-qb6d8\" (UID: \"290df532-7425-40e4-9b41-9c02555f63ae\") " pod="openshift-marketplace/redhat-marketplace-qb6d8"
Jan 29 15:39:29 crc kubenswrapper[4757]: I0129 15:39:29.695993 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv9c2\" (UniqueName: \"kubernetes.io/projected/290df532-7425-40e4-9b41-9c02555f63ae-kube-api-access-qv9c2\") pod \"redhat-marketplace-qb6d8\" (UID: \"290df532-7425-40e4-9b41-9c02555f63ae\") " pod="openshift-marketplace/redhat-marketplace-qb6d8"
Jan 29 15:39:29 crc kubenswrapper[4757]: I0129 15:39:29.793484 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qb6d8"
Jan 29 15:39:30 crc kubenswrapper[4757]: I0129 15:39:30.070222 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qb6d8"]
Jan 29 15:39:30 crc kubenswrapper[4757]: I0129 15:39:30.811099 4757 generic.go:334] "Generic (PLEG): container finished" podID="290df532-7425-40e4-9b41-9c02555f63ae" containerID="c2fda5190c1e7d58686e022a41efff09eaed5ec7a7d60da8e8d10135e209c5a1" exitCode=0
Jan 29 15:39:30 crc kubenswrapper[4757]: I0129 15:39:30.811455 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qb6d8" event={"ID":"290df532-7425-40e4-9b41-9c02555f63ae","Type":"ContainerDied","Data":"c2fda5190c1e7d58686e022a41efff09eaed5ec7a7d60da8e8d10135e209c5a1"}
Jan 29 15:39:30 crc kubenswrapper[4757]: I0129 15:39:30.811485 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qb6d8" event={"ID":"290df532-7425-40e4-9b41-9c02555f63ae","Type":"ContainerStarted","Data":"57b2b3f83ce266960c9c00ef3fd221a85f5e08dd544af31f3caee433f1354abe"}
Jan 29 15:39:32 crc kubenswrapper[4757]: I0129 15:39:32.830451 4757 generic.go:334] "Generic (PLEG): container finished" podID="290df532-7425-40e4-9b41-9c02555f63ae" containerID="52807a03d16b901e8ab36c6cd9e54946a5a5a79d023badfb6dbc7a54d5638e9b" exitCode=0
Jan 29 15:39:32 crc kubenswrapper[4757]: I0129 15:39:32.830558 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qb6d8" event={"ID":"290df532-7425-40e4-9b41-9c02555f63ae","Type":"ContainerDied","Data":"52807a03d16b901e8ab36c6cd9e54946a5a5a79d023badfb6dbc7a54d5638e9b"}
Jan 29 15:39:33 crc kubenswrapper[4757]: I0129 15:39:33.840821 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qb6d8" event={"ID":"290df532-7425-40e4-9b41-9c02555f63ae","Type":"ContainerStarted","Data":"e23cbbe890736cf96dd517cb11a89be3530b86d8b87dbf0531890a7050e400a1"}
Jan 29 15:39:33 crc kubenswrapper[4757]: I0129 15:39:33.861003 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qb6d8" podStartSLOduration=2.211127138 podStartE2EDuration="4.860985054s" podCreationTimestamp="2026-01-29 15:39:29 +0000 UTC" firstStartedPulling="2026-01-29 15:39:30.812643309 +0000 UTC m=+1734.101893546" lastFinishedPulling="2026-01-29 15:39:33.462501215 +0000 UTC m=+1736.751751462" observedRunningTime="2026-01-29 15:39:33.857884766 +0000 UTC m=+1737.147135023" watchObservedRunningTime="2026-01-29 15:39:33.860985054 +0000 UTC m=+1737.150235301"
Jan 29 15:39:39 crc kubenswrapper[4757]: I0129 15:39:39.795151 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qb6d8"
Jan 29 15:39:39 crc kubenswrapper[4757]: I0129 15:39:39.795565 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qb6d8"
Jan 29 15:39:39 crc kubenswrapper[4757]: I0129 15:39:39.847698 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qb6d8"
Jan 29 15:39:39 crc kubenswrapper[4757]: I0129 15:39:39.939500 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qb6d8"
Jan 29 15:39:40 crc kubenswrapper[4757]: I0129 15:39:40.082919 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qb6d8"]
Jan 29 15:39:41 crc kubenswrapper[4757]: I0129 15:39:41.901481 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qb6d8" podUID="290df532-7425-40e4-9b41-9c02555f63ae" containerName="registry-server" containerID="cri-o://e23cbbe890736cf96dd517cb11a89be3530b86d8b87dbf0531890a7050e400a1" gracePeriod=2
Jan 29 15:39:42 crc kubenswrapper[4757]: I0129 15:39:42.282754 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qb6d8"
Jan 29 15:39:42 crc kubenswrapper[4757]: I0129 15:39:42.476021 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/290df532-7425-40e4-9b41-9c02555f63ae-catalog-content\") pod \"290df532-7425-40e4-9b41-9c02555f63ae\" (UID: \"290df532-7425-40e4-9b41-9c02555f63ae\") "
Jan 29 15:39:42 crc kubenswrapper[4757]: I0129 15:39:42.476069 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/290df532-7425-40e4-9b41-9c02555f63ae-utilities\") pod \"290df532-7425-40e4-9b41-9c02555f63ae\" (UID: \"290df532-7425-40e4-9b41-9c02555f63ae\") "
Jan 29 15:39:42 crc kubenswrapper[4757]: I0129 15:39:42.476117 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv9c2\" (UniqueName: \"kubernetes.io/projected/290df532-7425-40e4-9b41-9c02555f63ae-kube-api-access-qv9c2\") pod \"290df532-7425-40e4-9b41-9c02555f63ae\" (UID: \"290df532-7425-40e4-9b41-9c02555f63ae\") "
Jan 29 15:39:42 crc kubenswrapper[4757]: I0129 15:39:42.477186 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/290df532-7425-40e4-9b41-9c02555f63ae-utilities" (OuterVolumeSpecName: "utilities") pod "290df532-7425-40e4-9b41-9c02555f63ae" (UID: "290df532-7425-40e4-9b41-9c02555f63ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:39:42 crc kubenswrapper[4757]: I0129 15:39:42.482649 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/290df532-7425-40e4-9b41-9c02555f63ae-kube-api-access-qv9c2" (OuterVolumeSpecName: "kube-api-access-qv9c2") pod "290df532-7425-40e4-9b41-9c02555f63ae" (UID: "290df532-7425-40e4-9b41-9c02555f63ae"). InnerVolumeSpecName "kube-api-access-qv9c2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:39:42 crc kubenswrapper[4757]: I0129 15:39:42.578489 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/290df532-7425-40e4-9b41-9c02555f63ae-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 15:39:42 crc kubenswrapper[4757]: I0129 15:39:42.578521 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv9c2\" (UniqueName: \"kubernetes.io/projected/290df532-7425-40e4-9b41-9c02555f63ae-kube-api-access-qv9c2\") on node \"crc\" DevicePath \"\""
Jan 29 15:39:42 crc kubenswrapper[4757]: I0129 15:39:42.922568 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/290df532-7425-40e4-9b41-9c02555f63ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "290df532-7425-40e4-9b41-9c02555f63ae" (UID: "290df532-7425-40e4-9b41-9c02555f63ae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:39:42 crc kubenswrapper[4757]: I0129 15:39:42.933891 4757 generic.go:334] "Generic (PLEG): container finished" podID="290df532-7425-40e4-9b41-9c02555f63ae" containerID="e23cbbe890736cf96dd517cb11a89be3530b86d8b87dbf0531890a7050e400a1" exitCode=0
Jan 29 15:39:42 crc kubenswrapper[4757]: I0129 15:39:42.933941 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qb6d8" event={"ID":"290df532-7425-40e4-9b41-9c02555f63ae","Type":"ContainerDied","Data":"e23cbbe890736cf96dd517cb11a89be3530b86d8b87dbf0531890a7050e400a1"}
Jan 29 15:39:42 crc kubenswrapper[4757]: I0129 15:39:42.933969 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qb6d8" event={"ID":"290df532-7425-40e4-9b41-9c02555f63ae","Type":"ContainerDied","Data":"57b2b3f83ce266960c9c00ef3fd221a85f5e08dd544af31f3caee433f1354abe"}
Jan 29 15:39:42 crc kubenswrapper[4757]: I0129 15:39:42.933991 4757 scope.go:117] "RemoveContainer" containerID="e23cbbe890736cf96dd517cb11a89be3530b86d8b87dbf0531890a7050e400a1"
Jan 29 15:39:42 crc kubenswrapper[4757]: I0129 15:39:42.934011 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qb6d8"
Jan 29 15:39:42 crc kubenswrapper[4757]: I0129 15:39:42.960073 4757 scope.go:117] "RemoveContainer" containerID="52807a03d16b901e8ab36c6cd9e54946a5a5a79d023badfb6dbc7a54d5638e9b"
Jan 29 15:39:42 crc kubenswrapper[4757]: I0129 15:39:42.977437 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qb6d8"]
Jan 29 15:39:42 crc kubenswrapper[4757]: I0129 15:39:42.984189 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/290df532-7425-40e4-9b41-9c02555f63ae-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 15:39:42 crc kubenswrapper[4757]: I0129 15:39:42.988075 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qb6d8"]
Jan 29 15:39:42 crc kubenswrapper[4757]: I0129 15:39:42.995861 4757 scope.go:117] "RemoveContainer" containerID="c2fda5190c1e7d58686e022a41efff09eaed5ec7a7d60da8e8d10135e209c5a1"
Jan 29 15:39:43 crc kubenswrapper[4757]: I0129 15:39:43.009594 4757 scope.go:117] "RemoveContainer" containerID="e23cbbe890736cf96dd517cb11a89be3530b86d8b87dbf0531890a7050e400a1"
Jan 29 15:39:43 crc kubenswrapper[4757]: E0129 15:39:43.013590 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e23cbbe890736cf96dd517cb11a89be3530b86d8b87dbf0531890a7050e400a1\": container with ID starting with e23cbbe890736cf96dd517cb11a89be3530b86d8b87dbf0531890a7050e400a1 not found: ID does not exist" containerID="e23cbbe890736cf96dd517cb11a89be3530b86d8b87dbf0531890a7050e400a1"
Jan 29 15:39:43 crc kubenswrapper[4757]: I0129 15:39:43.013641 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e23cbbe890736cf96dd517cb11a89be3530b86d8b87dbf0531890a7050e400a1"} err="failed to get container status \"e23cbbe890736cf96dd517cb11a89be3530b86d8b87dbf0531890a7050e400a1\": rpc error: code = NotFound desc = could not find container \"e23cbbe890736cf96dd517cb11a89be3530b86d8b87dbf0531890a7050e400a1\": container with ID starting with e23cbbe890736cf96dd517cb11a89be3530b86d8b87dbf0531890a7050e400a1 not found: ID does not exist"
Jan 29 15:39:43 crc kubenswrapper[4757]: I0129 15:39:43.013674 4757 scope.go:117] "RemoveContainer" containerID="52807a03d16b901e8ab36c6cd9e54946a5a5a79d023badfb6dbc7a54d5638e9b"
Jan 29 15:39:43 crc kubenswrapper[4757]: E0129 15:39:43.015617 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52807a03d16b901e8ab36c6cd9e54946a5a5a79d023badfb6dbc7a54d5638e9b\": container with ID starting with 52807a03d16b901e8ab36c6cd9e54946a5a5a79d023badfb6dbc7a54d5638e9b not found: ID does not exist" containerID="52807a03d16b901e8ab36c6cd9e54946a5a5a79d023badfb6dbc7a54d5638e9b"
Jan 29 15:39:43 crc kubenswrapper[4757]: I0129 15:39:43.015677 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52807a03d16b901e8ab36c6cd9e54946a5a5a79d023badfb6dbc7a54d5638e9b"} err="failed to get container status \"52807a03d16b901e8ab36c6cd9e54946a5a5a79d023badfb6dbc7a54d5638e9b\": rpc error: code = NotFound desc = could not find container \"52807a03d16b901e8ab36c6cd9e54946a5a5a79d023badfb6dbc7a54d5638e9b\": container with ID starting with 52807a03d16b901e8ab36c6cd9e54946a5a5a79d023badfb6dbc7a54d5638e9b not found: ID does not exist"
Jan 29 15:39:43 crc kubenswrapper[4757]: I0129 15:39:43.015712 4757 scope.go:117] "RemoveContainer" containerID="c2fda5190c1e7d58686e022a41efff09eaed5ec7a7d60da8e8d10135e209c5a1"
Jan 29 15:39:43 crc kubenswrapper[4757]: E0129 15:39:43.017137 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2fda5190c1e7d58686e022a41efff09eaed5ec7a7d60da8e8d10135e209c5a1\": container with ID starting with c2fda5190c1e7d58686e022a41efff09eaed5ec7a7d60da8e8d10135e209c5a1 not found: ID does not exist" containerID="c2fda5190c1e7d58686e022a41efff09eaed5ec7a7d60da8e8d10135e209c5a1"
Jan 29 15:39:43 crc kubenswrapper[4757]: I0129 15:39:43.017165 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2fda5190c1e7d58686e022a41efff09eaed5ec7a7d60da8e8d10135e209c5a1"} err="failed to get container status \"c2fda5190c1e7d58686e022a41efff09eaed5ec7a7d60da8e8d10135e209c5a1\": rpc error: code = NotFound desc = could not find container \"c2fda5190c1e7d58686e022a41efff09eaed5ec7a7d60da8e8d10135e209c5a1\": container with ID starting with c2fda5190c1e7d58686e022a41efff09eaed5ec7a7d60da8e8d10135e209c5a1 not found: ID does not exist"
Jan 29 15:39:43 crc kubenswrapper[4757]: I0129 15:39:43.396036 4757 scope.go:117] "RemoveContainer" containerID="ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d"
Jan 29 15:39:43 crc kubenswrapper[4757]: E0129 15:39:43.396482 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1"
Jan 29 15:39:43 crc kubenswrapper[4757]: I0129 15:39:43.404740 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="290df532-7425-40e4-9b41-9c02555f63ae" path="/var/lib/kubelet/pods/290df532-7425-40e4-9b41-9c02555f63ae/volumes"
Jan 29 15:39:56 crc kubenswrapper[4757]: I0129 15:39:56.395607 4757 scope.go:117] "RemoveContainer" containerID="ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d"
Jan 29 15:39:56 crc kubenswrapper[4757]: E0129 15:39:56.396358 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1"
Jan 29 15:40:11 crc kubenswrapper[4757]: I0129 15:40:11.397133 4757 scope.go:117] "RemoveContainer" containerID="ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d"
Jan 29 15:40:11 crc kubenswrapper[4757]: E0129 15:40:11.398475 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1"
Jan 29 15:40:22 crc kubenswrapper[4757]: I0129 15:40:22.396217 4757 scope.go:117] "RemoveContainer" containerID="ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d"
Jan 29 15:40:22 crc kubenswrapper[4757]: E0129 15:40:22.397126 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1"
Jan 29 15:40:37 crc kubenswrapper[4757]: I0129 15:40:37.415051 4757 scope.go:117] "RemoveContainer" containerID="ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d"
Jan 29 15:40:37 crc kubenswrapper[4757]: E0129 15:40:37.417099 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1"
Jan 29 15:40:49 crc kubenswrapper[4757]: I0129 15:40:49.396359 4757 scope.go:117] "RemoveContainer" containerID="ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d"
Jan 29 15:40:49 crc kubenswrapper[4757]: E0129 15:40:49.397110 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1"
Jan 29 15:41:01 crc kubenswrapper[4757]: I0129 15:41:01.396977 4757 scope.go:117] "RemoveContainer" containerID="ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d"
Jan 29 15:41:01 crc kubenswrapper[4757]: E0129 15:41:01.398187 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1"
Jan 29 15:41:14 crc kubenswrapper[4757]: I0129 15:41:14.395763 4757 scope.go:117] "RemoveContainer" containerID="ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d"
Jan 29 15:41:14 crc kubenswrapper[4757]: E0129 15:41:14.396507 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1"
Jan 29 15:41:26 crc kubenswrapper[4757]: I0129 15:41:26.403356 4757 scope.go:117] "RemoveContainer" containerID="ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d"
Jan 29 15:41:26 crc kubenswrapper[4757]: E0129 15:41:26.404193 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1"
Jan 29 15:41:37 crc kubenswrapper[4757]: I0129 15:41:37.401715 4757 scope.go:117] "RemoveContainer" containerID="ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d"
Jan 29 15:41:37 crc kubenswrapper[4757]: E0129 15:41:37.402745 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1"
Jan 29 15:41:40 crc kubenswrapper[4757]: I0129 15:41:40.931171 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mkwdx"]
Jan 29 15:41:40 crc kubenswrapper[4757]: E0129 15:41:40.933446 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="290df532-7425-40e4-9b41-9c02555f63ae" containerName="extract-utilities"
Jan 29 15:41:40 crc kubenswrapper[4757]: I0129 15:41:40.933629 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="290df532-7425-40e4-9b41-9c02555f63ae" containerName="extract-utilities"
Jan 29 15:41:40 crc kubenswrapper[4757]: E0129 15:41:40.934512 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="290df532-7425-40e4-9b41-9c02555f63ae" containerName="extract-content"
Jan 29 15:41:40 crc kubenswrapper[4757]: I0129 15:41:40.934711 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="290df532-7425-40e4-9b41-9c02555f63ae" containerName="extract-content"
Jan 29 15:41:40 crc kubenswrapper[4757]: E0129 15:41:40.934933 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="290df532-7425-40e4-9b41-9c02555f63ae" containerName="registry-server"
Jan 29 15:41:40 crc kubenswrapper[4757]: I0129 15:41:40.935119 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="290df532-7425-40e4-9b41-9c02555f63ae" containerName="registry-server"
Jan 29 15:41:40 crc kubenswrapper[4757]: I0129 15:41:40.935667 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="290df532-7425-40e4-9b41-9c02555f63ae" containerName="registry-server"
Jan 29 15:41:40 crc kubenswrapper[4757]: I0129 15:41:40.939816 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mkwdx"
Jan 29 15:41:40 crc kubenswrapper[4757]: I0129 15:41:40.943209 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mkwdx"]
Jan 29 15:41:41 crc kubenswrapper[4757]: I0129 15:41:41.006761 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b28facb-f10c-45f7-8e36-b6a96aa5471c-catalog-content\") pod \"community-operators-mkwdx\" (UID: \"0b28facb-f10c-45f7-8e36-b6a96aa5471c\") " pod="openshift-marketplace/community-operators-mkwdx"
Jan 29 15:41:41 crc kubenswrapper[4757]: I0129 15:41:41.006834 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b28facb-f10c-45f7-8e36-b6a96aa5471c-utilities\") pod \"community-operators-mkwdx\" (UID: \"0b28facb-f10c-45f7-8e36-b6a96aa5471c\") " pod="openshift-marketplace/community-operators-mkwdx"
Jan 29 15:41:41 crc kubenswrapper[4757]: I0129 15:41:41.006888 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6l9j\" (UniqueName: \"kubernetes.io/projected/0b28facb-f10c-45f7-8e36-b6a96aa5471c-kube-api-access-m6l9j\") pod \"community-operators-mkwdx\" (UID: \"0b28facb-f10c-45f7-8e36-b6a96aa5471c\") " pod="openshift-marketplace/community-operators-mkwdx"
Jan 29 15:41:41 crc kubenswrapper[4757]: I0129 15:41:41.108592 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b28facb-f10c-45f7-8e36-b6a96aa5471c-utilities\") pod \"community-operators-mkwdx\" (UID: \"0b28facb-f10c-45f7-8e36-b6a96aa5471c\") " pod="openshift-marketplace/community-operators-mkwdx"
Jan 29 15:41:41 crc kubenswrapper[4757]: I0129 15:41:41.108667 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6l9j\" (UniqueName: \"kubernetes.io/projected/0b28facb-f10c-45f7-8e36-b6a96aa5471c-kube-api-access-m6l9j\") pod \"community-operators-mkwdx\" (UID: \"0b28facb-f10c-45f7-8e36-b6a96aa5471c\") " pod="openshift-marketplace/community-operators-mkwdx"
Jan 29 15:41:41 crc kubenswrapper[4757]: I0129 15:41:41.108747 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b28facb-f10c-45f7-8e36-b6a96aa5471c-catalog-content\") pod \"community-operators-mkwdx\" (UID: \"0b28facb-f10c-45f7-8e36-b6a96aa5471c\") " pod="openshift-marketplace/community-operators-mkwdx"
Jan 29 15:41:41 crc kubenswrapper[4757]: I0129 15:41:41.109189 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b28facb-f10c-45f7-8e36-b6a96aa5471c-utilities\") pod \"community-operators-mkwdx\" (UID: \"0b28facb-f10c-45f7-8e36-b6a96aa5471c\") " pod="openshift-marketplace/community-operators-mkwdx"
Jan 29 15:41:41 crc kubenswrapper[4757]: I0129 15:41:41.109223 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b28facb-f10c-45f7-8e36-b6a96aa5471c-catalog-content\") pod \"community-operators-mkwdx\" (UID: \"0b28facb-f10c-45f7-8e36-b6a96aa5471c\") " pod="openshift-marketplace/community-operators-mkwdx"
Jan 29 15:41:41 crc kubenswrapper[4757]: I0129 15:41:41.127424 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6l9j\" (UniqueName: \"kubernetes.io/projected/0b28facb-f10c-45f7-8e36-b6a96aa5471c-kube-api-access-m6l9j\") pod \"community-operators-mkwdx\" (UID: \"0b28facb-f10c-45f7-8e36-b6a96aa5471c\") " pod="openshift-marketplace/community-operators-mkwdx"
Jan 29 15:41:41 crc kubenswrapper[4757]: I0129 15:41:41.259829 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mkwdx"
Jan 29 15:41:41 crc kubenswrapper[4757]: I0129 15:41:41.847507 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mkwdx"]
Jan 29 15:41:41 crc kubenswrapper[4757]: W0129 15:41:41.855261 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b28facb_f10c_45f7_8e36_b6a96aa5471c.slice/crio-7fbf7d2a1c0184bc9061a73f5e2b1322861c7954574eec6ffe4c568a087358c7 WatchSource:0}: Error finding container 7fbf7d2a1c0184bc9061a73f5e2b1322861c7954574eec6ffe4c568a087358c7: Status 404 returned error can't find the container with id 7fbf7d2a1c0184bc9061a73f5e2b1322861c7954574eec6ffe4c568a087358c7
Jan 29 15:41:42 crc kubenswrapper[4757]: I0129 15:41:42.864792 4757 generic.go:334] "Generic (PLEG): container finished" podID="0b28facb-f10c-45f7-8e36-b6a96aa5471c" containerID="e4b52ce52e8568491b084bed01e5191d2d5d6c751bd54e697e1b2c5ba0c97da1" exitCode=0
Jan 29 15:41:42 crc kubenswrapper[4757]: I0129 15:41:42.864840 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkwdx" event={"ID":"0b28facb-f10c-45f7-8e36-b6a96aa5471c","Type":"ContainerDied","Data":"e4b52ce52e8568491b084bed01e5191d2d5d6c751bd54e697e1b2c5ba0c97da1"}
Jan 29 15:41:42 crc kubenswrapper[4757]: I0129 15:41:42.864894 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkwdx" event={"ID":"0b28facb-f10c-45f7-8e36-b6a96aa5471c","Type":"ContainerStarted","Data":"7fbf7d2a1c0184bc9061a73f5e2b1322861c7954574eec6ffe4c568a087358c7"}
Jan 29 15:41:44 crc kubenswrapper[4757]: I0129 15:41:44.878970 4757 generic.go:334] "Generic (PLEG): container finished" podID="0b28facb-f10c-45f7-8e36-b6a96aa5471c" containerID="e99f9d1c88fe448a27bb004fecd2e696bd4db4e36a388151404053fe26686f6b" exitCode=0
Jan 29 15:41:44 crc kubenswrapper[4757]: I0129 15:41:44.879184 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkwdx" event={"ID":"0b28facb-f10c-45f7-8e36-b6a96aa5471c","Type":"ContainerDied","Data":"e99f9d1c88fe448a27bb004fecd2e696bd4db4e36a388151404053fe26686f6b"}
Jan 29 15:41:45 crc kubenswrapper[4757]: I0129 15:41:45.888430 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkwdx" event={"ID":"0b28facb-f10c-45f7-8e36-b6a96aa5471c","Type":"ContainerStarted","Data":"0fcd14c172f590fabb580a02f4ee9d19feb27032191201d875d01eee409a5645"}
Jan 29 15:41:45 crc kubenswrapper[4757]: I0129 15:41:45.912747 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mkwdx" podStartSLOduration=3.498899337 podStartE2EDuration="5.912727577s" podCreationTimestamp="2026-01-29 15:41:40 +0000 UTC" firstStartedPulling="2026-01-29 15:41:42.86707146 +0000 UTC m=+1866.156321707" lastFinishedPulling="2026-01-29 15:41:45.28089969 +0000 UTC m=+1868.570149947" observedRunningTime="2026-01-29 15:41:45.905507719 +0000 UTC m=+1869.194757976" watchObservedRunningTime="2026-01-29 15:41:45.912727577 +0000 UTC m=+1869.201977824"
Jan 29 15:41:51 crc kubenswrapper[4757]: I0129 15:41:51.260792 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mkwdx"
Jan 29 15:41:51 crc kubenswrapper[4757]: I0129 15:41:51.261343 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mkwdx"
Jan 29 15:41:51 crc kubenswrapper[4757]: I0129 15:41:51.304194 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mkwdx"
Jan 29 15:41:51 crc kubenswrapper[4757]: I0129 15:41:51.965229 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mkwdx"
Jan 29 15:41:52 crc kubenswrapper[4757]: I0129 15:41:52.011107 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mkwdx"]
Jan 29 15:41:52 crc kubenswrapper[4757]: I0129 15:41:52.396672 4757 scope.go:117] "RemoveContainer" containerID="ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d"
Jan 29 15:41:52 crc kubenswrapper[4757]: I0129 15:41:52.938586 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerStarted","Data":"60fa87b617c1542c879897bee41087b09b00b7c22cd079c2dbce29eda0b6c165"}
Jan 29 15:41:53 crc kubenswrapper[4757]: I0129 15:41:53.943798 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mkwdx" podUID="0b28facb-f10c-45f7-8e36-b6a96aa5471c" containerName="registry-server" containerID="cri-o://0fcd14c172f590fabb580a02f4ee9d19feb27032191201d875d01eee409a5645" gracePeriod=2
Jan 29 15:41:54 crc kubenswrapper[4757]: I0129 15:41:54.366415 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mkwdx"
Jan 29 15:41:54 crc kubenswrapper[4757]: I0129 15:41:54.486914 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6l9j\" (UniqueName: \"kubernetes.io/projected/0b28facb-f10c-45f7-8e36-b6a96aa5471c-kube-api-access-m6l9j\") pod \"0b28facb-f10c-45f7-8e36-b6a96aa5471c\" (UID: \"0b28facb-f10c-45f7-8e36-b6a96aa5471c\") "
Jan 29 15:41:54 crc kubenswrapper[4757]: I0129 15:41:54.487092 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b28facb-f10c-45f7-8e36-b6a96aa5471c-utilities\") pod \"0b28facb-f10c-45f7-8e36-b6a96aa5471c\" (UID: \"0b28facb-f10c-45f7-8e36-b6a96aa5471c\") "
Jan 29 15:41:54 crc kubenswrapper[4757]: I0129 15:41:54.487123 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b28facb-f10c-45f7-8e36-b6a96aa5471c-catalog-content\") pod \"0b28facb-f10c-45f7-8e36-b6a96aa5471c\" (UID: \"0b28facb-f10c-45f7-8e36-b6a96aa5471c\") "
Jan 29 15:41:54 crc kubenswrapper[4757]: I0129 15:41:54.492426 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b28facb-f10c-45f7-8e36-b6a96aa5471c-utilities" (OuterVolumeSpecName: "utilities") pod "0b28facb-f10c-45f7-8e36-b6a96aa5471c" (UID: "0b28facb-f10c-45f7-8e36-b6a96aa5471c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:41:54 crc kubenswrapper[4757]: I0129 15:41:54.511901 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b28facb-f10c-45f7-8e36-b6a96aa5471c-kube-api-access-m6l9j" (OuterVolumeSpecName: "kube-api-access-m6l9j") pod "0b28facb-f10c-45f7-8e36-b6a96aa5471c" (UID: "0b28facb-f10c-45f7-8e36-b6a96aa5471c"). InnerVolumeSpecName "kube-api-access-m6l9j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:41:54 crc kubenswrapper[4757]: I0129 15:41:54.589786 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b28facb-f10c-45f7-8e36-b6a96aa5471c-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 15:41:54 crc kubenswrapper[4757]: I0129 15:41:54.590082 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6l9j\" (UniqueName: \"kubernetes.io/projected/0b28facb-f10c-45f7-8e36-b6a96aa5471c-kube-api-access-m6l9j\") on node \"crc\" DevicePath \"\""
Jan 29 15:41:54 crc kubenswrapper[4757]: I0129 15:41:54.597616 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b28facb-f10c-45f7-8e36-b6a96aa5471c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b28facb-f10c-45f7-8e36-b6a96aa5471c" (UID: "0b28facb-f10c-45f7-8e36-b6a96aa5471c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:41:54 crc kubenswrapper[4757]: I0129 15:41:54.691853 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b28facb-f10c-45f7-8e36-b6a96aa5471c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 15:41:54 crc kubenswrapper[4757]: I0129 15:41:54.952406 4757 generic.go:334] "Generic (PLEG): container finished" podID="0b28facb-f10c-45f7-8e36-b6a96aa5471c" containerID="0fcd14c172f590fabb580a02f4ee9d19feb27032191201d875d01eee409a5645" exitCode=0
Jan 29 15:41:54 crc kubenswrapper[4757]: I0129 15:41:54.952446 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkwdx" event={"ID":"0b28facb-f10c-45f7-8e36-b6a96aa5471c","Type":"ContainerDied","Data":"0fcd14c172f590fabb580a02f4ee9d19feb27032191201d875d01eee409a5645"}
Jan 29 15:41:54 crc kubenswrapper[4757]: I0129 15:41:54.952486 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkwdx" event={"ID":"0b28facb-f10c-45f7-8e36-b6a96aa5471c","Type":"ContainerDied","Data":"7fbf7d2a1c0184bc9061a73f5e2b1322861c7954574eec6ffe4c568a087358c7"}
Jan 29 15:41:54 crc kubenswrapper[4757]: I0129 15:41:54.952509 4757 scope.go:117] "RemoveContainer" containerID="0fcd14c172f590fabb580a02f4ee9d19feb27032191201d875d01eee409a5645"
Jan 29 15:41:54 crc kubenswrapper[4757]: I0129 15:41:54.952448 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mkwdx"
Jan 29 15:41:54 crc kubenswrapper[4757]: I0129 15:41:54.977241 4757 scope.go:117] "RemoveContainer" containerID="e99f9d1c88fe448a27bb004fecd2e696bd4db4e36a388151404053fe26686f6b"
Jan 29 15:41:54 crc kubenswrapper[4757]: I0129 15:41:54.981912 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mkwdx"]
Jan 29 15:41:54 crc kubenswrapper[4757]: I0129 15:41:54.994843 4757 scope.go:117] "RemoveContainer" containerID="e4b52ce52e8568491b084bed01e5191d2d5d6c751bd54e697e1b2c5ba0c97da1"
Jan 29 15:41:55 crc kubenswrapper[4757]: I0129 15:41:55.002506 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mkwdx"]
Jan 29 15:41:55 crc kubenswrapper[4757]: I0129 15:41:55.028260 4757 scope.go:117] "RemoveContainer" containerID="0fcd14c172f590fabb580a02f4ee9d19feb27032191201d875d01eee409a5645"
Jan 29 15:41:55 crc kubenswrapper[4757]: E0129 15:41:55.028783 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fcd14c172f590fabb580a02f4ee9d19feb27032191201d875d01eee409a5645\": container with ID starting with 0fcd14c172f590fabb580a02f4ee9d19feb27032191201d875d01eee409a5645 not found: ID does not exist" containerID="0fcd14c172f590fabb580a02f4ee9d19feb27032191201d875d01eee409a5645"
Jan 29 15:41:55 crc kubenswrapper[4757]: I0129 15:41:55.028850 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fcd14c172f590fabb580a02f4ee9d19feb27032191201d875d01eee409a5645"} err="failed to get container status \"0fcd14c172f590fabb580a02f4ee9d19feb27032191201d875d01eee409a5645\": rpc error: code = NotFound desc = could not find container \"0fcd14c172f590fabb580a02f4ee9d19feb27032191201d875d01eee409a5645\": container with ID starting with 0fcd14c172f590fabb580a02f4ee9d19feb27032191201d875d01eee409a5645 not found: ID does not exist"
Jan 29 15:41:55 crc kubenswrapper[4757]: I0129 15:41:55.028876 4757 scope.go:117] "RemoveContainer" containerID="e99f9d1c88fe448a27bb004fecd2e696bd4db4e36a388151404053fe26686f6b"
Jan 29 15:41:55 crc kubenswrapper[4757]: E0129 15:41:55.029399 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e99f9d1c88fe448a27bb004fecd2e696bd4db4e36a388151404053fe26686f6b\": container with ID starting with e99f9d1c88fe448a27bb004fecd2e696bd4db4e36a388151404053fe26686f6b not found: ID does not exist" containerID="e99f9d1c88fe448a27bb004fecd2e696bd4db4e36a388151404053fe26686f6b"
Jan 29 15:41:55 crc kubenswrapper[4757]: I0129 15:41:55.029546 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e99f9d1c88fe448a27bb004fecd2e696bd4db4e36a388151404053fe26686f6b"} err="failed to get container status \"e99f9d1c88fe448a27bb004fecd2e696bd4db4e36a388151404053fe26686f6b\": rpc error: code = NotFound desc = could not find container \"e99f9d1c88fe448a27bb004fecd2e696bd4db4e36a388151404053fe26686f6b\": container with ID starting with e99f9d1c88fe448a27bb004fecd2e696bd4db4e36a388151404053fe26686f6b not found: ID does not exist"
Jan 29 15:41:55 crc kubenswrapper[4757]: I0129 15:41:55.029702 4757 scope.go:117] "RemoveContainer" containerID="e4b52ce52e8568491b084bed01e5191d2d5d6c751bd54e697e1b2c5ba0c97da1"
Jan 29 15:41:55 crc kubenswrapper[4757]: E0129 15:41:55.030140 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4b52ce52e8568491b084bed01e5191d2d5d6c751bd54e697e1b2c5ba0c97da1\": container with ID starting with e4b52ce52e8568491b084bed01e5191d2d5d6c751bd54e697e1b2c5ba0c97da1 not found: ID does not exist" containerID="e4b52ce52e8568491b084bed01e5191d2d5d6c751bd54e697e1b2c5ba0c97da1"
Jan 29 15:41:55 crc kubenswrapper[4757]: I0129 15:41:55.030169 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4b52ce52e8568491b084bed01e5191d2d5d6c751bd54e697e1b2c5ba0c97da1"} err="failed to get container status \"e4b52ce52e8568491b084bed01e5191d2d5d6c751bd54e697e1b2c5ba0c97da1\": rpc error: code = NotFound desc = could not find container \"e4b52ce52e8568491b084bed01e5191d2d5d6c751bd54e697e1b2c5ba0c97da1\": container with ID starting with e4b52ce52e8568491b084bed01e5191d2d5d6c751bd54e697e1b2c5ba0c97da1 not found: ID does not exist"
Jan 29 15:41:55 crc kubenswrapper[4757]: I0129 15:41:55.403997 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b28facb-f10c-45f7-8e36-b6a96aa5471c" path="/var/lib/kubelet/pods/0b28facb-f10c-45f7-8e36-b6a96aa5471c/volumes"
Jan 29 15:44:17 crc kubenswrapper[4757]: I0129 15:44:17.604959 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 15:44:17 crc kubenswrapper[4757]: I0129 15:44:17.605556 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 15:44:47 crc kubenswrapper[4757]: I0129 15:44:47.605392 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 15:44:47 crc kubenswrapper[4757]: I0129 15:44:47.605986 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 15:45:00 crc kubenswrapper[4757]: I0129 15:45:00.161601 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495025-9lzpt"]
Jan 29 15:45:00 crc kubenswrapper[4757]: E0129 15:45:00.162688 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b28facb-f10c-45f7-8e36-b6a96aa5471c" containerName="extract-utilities"
Jan 29 15:45:00 crc kubenswrapper[4757]: I0129 15:45:00.162711 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b28facb-f10c-45f7-8e36-b6a96aa5471c" containerName="extract-utilities"
Jan 29 15:45:00 crc kubenswrapper[4757]: E0129 15:45:00.162748 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b28facb-f10c-45f7-8e36-b6a96aa5471c" containerName="registry-server"
Jan 29 15:45:00 crc kubenswrapper[4757]: I0129 15:45:00.162760 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b28facb-f10c-45f7-8e36-b6a96aa5471c" containerName="registry-server"
Jan 29 15:45:00 crc kubenswrapper[4757]: E0129 15:45:00.162781 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b28facb-f10c-45f7-8e36-b6a96aa5471c" containerName="extract-content"
Jan 29 15:45:00 crc kubenswrapper[4757]: I0129 15:45:00.162793 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b28facb-f10c-45f7-8e36-b6a96aa5471c" containerName="extract-content"
Jan 29 15:45:00 crc kubenswrapper[4757]: I0129 15:45:00.163038 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b28facb-f10c-45f7-8e36-b6a96aa5471c" containerName="registry-server"
Jan 29 15:45:00 crc kubenswrapper[4757]: I0129 15:45:00.163827 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-9lzpt"
Jan 29 15:45:00 crc kubenswrapper[4757]: I0129 15:45:00.167795 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 29 15:45:00 crc kubenswrapper[4757]: I0129 15:45:00.168487 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 29 15:45:00 crc kubenswrapper[4757]: I0129 15:45:00.182922 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495025-9lzpt"]
Jan 29 15:45:00 crc kubenswrapper[4757]: I0129 15:45:00.290562 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82c00fc1-c9d8-4bdd-93af-a56ffe57cd32-secret-volume\") pod \"collect-profiles-29495025-9lzpt\" (UID: \"82c00fc1-c9d8-4bdd-93af-a56ffe57cd32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-9lzpt"
Jan 29 15:45:00 crc kubenswrapper[4757]: I0129 15:45:00.290618 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-864mq\" (UniqueName: \"kubernetes.io/projected/82c00fc1-c9d8-4bdd-93af-a56ffe57cd32-kube-api-access-864mq\") pod \"collect-profiles-29495025-9lzpt\" (UID: \"82c00fc1-c9d8-4bdd-93af-a56ffe57cd32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-9lzpt"
Jan 29 15:45:00 crc kubenswrapper[4757]: I0129 15:45:00.290684 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82c00fc1-c9d8-4bdd-93af-a56ffe57cd32-config-volume\") pod \"collect-profiles-29495025-9lzpt\" (UID: \"82c00fc1-c9d8-4bdd-93af-a56ffe57cd32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-9lzpt"
Jan 29 15:45:00 crc kubenswrapper[4757]: I0129 15:45:00.391858 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82c00fc1-c9d8-4bdd-93af-a56ffe57cd32-secret-volume\") pod \"collect-profiles-29495025-9lzpt\" (UID: \"82c00fc1-c9d8-4bdd-93af-a56ffe57cd32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-9lzpt"
Jan 29 15:45:00 crc kubenswrapper[4757]: I0129 15:45:00.391927 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-864mq\" (UniqueName: \"kubernetes.io/projected/82c00fc1-c9d8-4bdd-93af-a56ffe57cd32-kube-api-access-864mq\") pod \"collect-profiles-29495025-9lzpt\" (UID: \"82c00fc1-c9d8-4bdd-93af-a56ffe57cd32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-9lzpt"
Jan 29 15:45:00 crc kubenswrapper[4757]: I0129 15:45:00.391992 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82c00fc1-c9d8-4bdd-93af-a56ffe57cd32-config-volume\") pod \"collect-profiles-29495025-9lzpt\" (UID: \"82c00fc1-c9d8-4bdd-93af-a56ffe57cd32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-9lzpt"
Jan 29 15:45:00 crc kubenswrapper[4757]: I0129 15:45:00.393084 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82c00fc1-c9d8-4bdd-93af-a56ffe57cd32-config-volume\") pod \"collect-profiles-29495025-9lzpt\" (UID: \"82c00fc1-c9d8-4bdd-93af-a56ffe57cd32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-9lzpt"
Jan 29 15:45:00 crc kubenswrapper[4757]: I0129 15:45:00.398203 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82c00fc1-c9d8-4bdd-93af-a56ffe57cd32-secret-volume\") pod \"collect-profiles-29495025-9lzpt\" (UID: \"82c00fc1-c9d8-4bdd-93af-a56ffe57cd32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-9lzpt"
Jan 29 15:45:00 crc kubenswrapper[4757]: I0129 15:45:00.413877 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-864mq\" (UniqueName: \"kubernetes.io/projected/82c00fc1-c9d8-4bdd-93af-a56ffe57cd32-kube-api-access-864mq\") pod \"collect-profiles-29495025-9lzpt\" (UID: \"82c00fc1-c9d8-4bdd-93af-a56ffe57cd32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-9lzpt"
Jan 29 15:45:00 crc kubenswrapper[4757]: I0129 15:45:00.489565 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-9lzpt"
Jan 29 15:45:00 crc kubenswrapper[4757]: I0129 15:45:00.916420 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495025-9lzpt"]
Jan 29 15:45:01 crc kubenswrapper[4757]: I0129 15:45:01.491205 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-9lzpt" event={"ID":"82c00fc1-c9d8-4bdd-93af-a56ffe57cd32","Type":"ContainerDied","Data":"7505966eae7696799f08e0b572dbe2758f2895ae8c3197256db09806560b255a"}
Jan 29 15:45:01 crc kubenswrapper[4757]: I0129 15:45:01.491053 4757 generic.go:334] "Generic (PLEG): container finished" podID="82c00fc1-c9d8-4bdd-93af-a56ffe57cd32" containerID="7505966eae7696799f08e0b572dbe2758f2895ae8c3197256db09806560b255a" exitCode=0
Jan 29 15:45:01 crc kubenswrapper[4757]: I0129 15:45:01.491573 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-9lzpt" event={"ID":"82c00fc1-c9d8-4bdd-93af-a56ffe57cd32","Type":"ContainerStarted","Data":"dd39be9cafecc881f5749f46afe372eda7b8fc90100498727613d25f21d73871"}
Jan 29 15:45:02 crc kubenswrapper[4757]: I0129 15:45:02.792910 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-9lzpt"
Jan 29 15:45:02 crc kubenswrapper[4757]: I0129 15:45:02.928067 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82c00fc1-c9d8-4bdd-93af-a56ffe57cd32-config-volume\") pod \"82c00fc1-c9d8-4bdd-93af-a56ffe57cd32\" (UID: \"82c00fc1-c9d8-4bdd-93af-a56ffe57cd32\") "
Jan 29 15:45:02 crc kubenswrapper[4757]: I0129 15:45:02.928160 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-864mq\" (UniqueName: \"kubernetes.io/projected/82c00fc1-c9d8-4bdd-93af-a56ffe57cd32-kube-api-access-864mq\") pod \"82c00fc1-c9d8-4bdd-93af-a56ffe57cd32\" (UID: \"82c00fc1-c9d8-4bdd-93af-a56ffe57cd32\") "
Jan 29 15:45:02 crc kubenswrapper[4757]: I0129 15:45:02.928194 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82c00fc1-c9d8-4bdd-93af-a56ffe57cd32-secret-volume\") pod \"82c00fc1-c9d8-4bdd-93af-a56ffe57cd32\" (UID: \"82c00fc1-c9d8-4bdd-93af-a56ffe57cd32\") "
Jan 29 15:45:02 crc kubenswrapper[4757]: I0129 15:45:02.929013 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82c00fc1-c9d8-4bdd-93af-a56ffe57cd32-config-volume" (OuterVolumeSpecName: "config-volume") pod "82c00fc1-c9d8-4bdd-93af-a56ffe57cd32" (UID: "82c00fc1-c9d8-4bdd-93af-a56ffe57cd32"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:45:02 crc kubenswrapper[4757]: I0129 15:45:02.933383 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82c00fc1-c9d8-4bdd-93af-a56ffe57cd32-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "82c00fc1-c9d8-4bdd-93af-a56ffe57cd32" (UID: "82c00fc1-c9d8-4bdd-93af-a56ffe57cd32"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:45:02 crc kubenswrapper[4757]: I0129 15:45:02.933664 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82c00fc1-c9d8-4bdd-93af-a56ffe57cd32-kube-api-access-864mq" (OuterVolumeSpecName: "kube-api-access-864mq") pod "82c00fc1-c9d8-4bdd-93af-a56ffe57cd32" (UID: "82c00fc1-c9d8-4bdd-93af-a56ffe57cd32"). InnerVolumeSpecName "kube-api-access-864mq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:45:03 crc kubenswrapper[4757]: I0129 15:45:03.030176 4757 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82c00fc1-c9d8-4bdd-93af-a56ffe57cd32-config-volume\") on node \"crc\" DevicePath \"\""
Jan 29 15:45:03 crc kubenswrapper[4757]: I0129 15:45:03.030218 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-864mq\" (UniqueName: \"kubernetes.io/projected/82c00fc1-c9d8-4bdd-93af-a56ffe57cd32-kube-api-access-864mq\") on node \"crc\" DevicePath \"\""
Jan 29 15:45:03 crc kubenswrapper[4757]: I0129 15:45:03.030230 4757 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/82c00fc1-c9d8-4bdd-93af-a56ffe57cd32-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 29 15:45:03 crc kubenswrapper[4757]: I0129 15:45:03.505472 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-9lzpt" event={"ID":"82c00fc1-c9d8-4bdd-93af-a56ffe57cd32","Type":"ContainerDied","Data":"dd39be9cafecc881f5749f46afe372eda7b8fc90100498727613d25f21d73871"}
Jan 29 15:45:03 crc kubenswrapper[4757]: I0129 15:45:03.505512 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-9lzpt"
Jan 29 15:45:03 crc kubenswrapper[4757]: I0129 15:45:03.505519 4757 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd39be9cafecc881f5749f46afe372eda7b8fc90100498727613d25f21d73871"
Jan 29 15:45:03 crc kubenswrapper[4757]: I0129 15:45:03.873565 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494980-9zrww"]
Jan 29 15:45:03 crc kubenswrapper[4757]: I0129 15:45:03.893040 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494980-9zrww"]
Jan 29 15:45:05 crc kubenswrapper[4757]: I0129 15:45:05.405598 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8548b94-9099-42d5-914d-c2c10561bc5a" path="/var/lib/kubelet/pods/c8548b94-9099-42d5-914d-c2c10561bc5a/volumes"
Jan 29 15:45:17 crc kubenswrapper[4757]: I0129 15:45:17.604358 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 15:45:17 crc kubenswrapper[4757]: I0129 15:45:17.604871 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 15:45:17 crc kubenswrapper[4757]: I0129 15:45:17.604917 4757 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-45q8t"
Jan 29 15:45:17 crc kubenswrapper[4757]: I0129 15:45:17.605603 4757 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"60fa87b617c1542c879897bee41087b09b00b7c22cd079c2dbce29eda0b6c165"}
pod="openshift-machine-config-operator/machine-config-daemon-45q8t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:45:17 crc kubenswrapper[4757]: I0129 15:45:17.605679 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" containerID="cri-o://60fa87b617c1542c879897bee41087b09b00b7c22cd079c2dbce29eda0b6c165" gracePeriod=600 Jan 29 15:45:17 crc kubenswrapper[4757]: E0129 15:45:17.785392 4757 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf453676a_fbf0_4159_8a5a_04c0138b42c1.slice/crio-60fa87b617c1542c879897bee41087b09b00b7c22cd079c2dbce29eda0b6c165.scope\": RecentStats: unable to find data in memory cache]" Jan 29 15:45:18 crc kubenswrapper[4757]: I0129 15:45:18.605718 4757 generic.go:334] "Generic (PLEG): container finished" podID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerID="60fa87b617c1542c879897bee41087b09b00b7c22cd079c2dbce29eda0b6c165" exitCode=0 Jan 29 15:45:18 crc kubenswrapper[4757]: I0129 15:45:18.605857 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerDied","Data":"60fa87b617c1542c879897bee41087b09b00b7c22cd079c2dbce29eda0b6c165"} Jan 29 15:45:18 crc kubenswrapper[4757]: I0129 15:45:18.606042 4757 scope.go:117] "RemoveContainer" containerID="ce5584904f2aac45769eeae86ca2f50a20f1001f03097f6ae76ce2340eca5c5d" Jan 29 15:45:19 crc kubenswrapper[4757]: I0129 15:45:19.613812 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerStarted","Data":"7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e"} Jan 29 15:45:21 crc kubenswrapper[4757]: I0129 15:45:21.565314 4757 scope.go:117] "RemoveContainer" containerID="70445d3a6be4b1bc25e607c9d71e752774df96544d331f1b0f373c0d9ffd4967" Jan 29 15:47:47 crc kubenswrapper[4757]: I0129 15:47:47.389213 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dqgdw"] Jan 29 15:47:47 crc kubenswrapper[4757]: E0129 15:47:47.390026 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82c00fc1-c9d8-4bdd-93af-a56ffe57cd32" containerName="collect-profiles" Jan 29 15:47:47 crc kubenswrapper[4757]: I0129 15:47:47.390040 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="82c00fc1-c9d8-4bdd-93af-a56ffe57cd32" containerName="collect-profiles" Jan 29 15:47:47 crc kubenswrapper[4757]: I0129 15:47:47.390221 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="82c00fc1-c9d8-4bdd-93af-a56ffe57cd32" containerName="collect-profiles" Jan 29 15:47:47 crc kubenswrapper[4757]: I0129 15:47:47.392254 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dqgdw" Jan 29 15:47:47 crc kubenswrapper[4757]: I0129 15:47:47.425358 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dqgdw"] Jan 29 15:47:47 crc kubenswrapper[4757]: I0129 15:47:47.470004 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4zmc\" (UniqueName: \"kubernetes.io/projected/b4804ec2-fd33-4ee7-81fd-a39d5688dfbf-kube-api-access-z4zmc\") pod \"redhat-operators-dqgdw\" (UID: \"b4804ec2-fd33-4ee7-81fd-a39d5688dfbf\") " pod="openshift-marketplace/redhat-operators-dqgdw" Jan 29 15:47:47 crc kubenswrapper[4757]: I0129 15:47:47.470073 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4804ec2-fd33-4ee7-81fd-a39d5688dfbf-catalog-content\") pod \"redhat-operators-dqgdw\" (UID: \"b4804ec2-fd33-4ee7-81fd-a39d5688dfbf\") " pod="openshift-marketplace/redhat-operators-dqgdw" Jan 29 15:47:47 crc kubenswrapper[4757]: I0129 15:47:47.470319 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4804ec2-fd33-4ee7-81fd-a39d5688dfbf-utilities\") pod \"redhat-operators-dqgdw\" (UID: \"b4804ec2-fd33-4ee7-81fd-a39d5688dfbf\") " pod="openshift-marketplace/redhat-operators-dqgdw" Jan 29 15:47:47 crc kubenswrapper[4757]: I0129 15:47:47.574809 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4804ec2-fd33-4ee7-81fd-a39d5688dfbf-catalog-content\") pod \"redhat-operators-dqgdw\" (UID: \"b4804ec2-fd33-4ee7-81fd-a39d5688dfbf\") " pod="openshift-marketplace/redhat-operators-dqgdw" Jan 29 15:47:47 crc kubenswrapper[4757]: I0129 15:47:47.574859 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4zmc\" (UniqueName: \"kubernetes.io/projected/b4804ec2-fd33-4ee7-81fd-a39d5688dfbf-kube-api-access-z4zmc\") pod \"redhat-operators-dqgdw\" (UID: \"b4804ec2-fd33-4ee7-81fd-a39d5688dfbf\") " pod="openshift-marketplace/redhat-operators-dqgdw" Jan 29 15:47:47 crc kubenswrapper[4757]: I0129 15:47:47.574896 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4804ec2-fd33-4ee7-81fd-a39d5688dfbf-utilities\") pod \"redhat-operators-dqgdw\" (UID: \"b4804ec2-fd33-4ee7-81fd-a39d5688dfbf\") " pod="openshift-marketplace/redhat-operators-dqgdw" Jan 29 15:47:47 crc kubenswrapper[4757]: I0129 15:47:47.575322 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4804ec2-fd33-4ee7-81fd-a39d5688dfbf-utilities\") pod \"redhat-operators-dqgdw\" (UID: \"b4804ec2-fd33-4ee7-81fd-a39d5688dfbf\") " pod="openshift-marketplace/redhat-operators-dqgdw" Jan 29 15:47:47 crc kubenswrapper[4757]: I0129 15:47:47.575363 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4804ec2-fd33-4ee7-81fd-a39d5688dfbf-catalog-content\") pod \"redhat-operators-dqgdw\" (UID: \"b4804ec2-fd33-4ee7-81fd-a39d5688dfbf\") " pod="openshift-marketplace/redhat-operators-dqgdw" Jan 29 15:47:47 crc kubenswrapper[4757]: I0129 15:47:47.604537 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:47:47 crc kubenswrapper[4757]: I0129 15:47:47.605021 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:47:47 crc kubenswrapper[4757]: I0129 15:47:47.606364 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4zmc\" (UniqueName: \"kubernetes.io/projected/b4804ec2-fd33-4ee7-81fd-a39d5688dfbf-kube-api-access-z4zmc\") pod \"redhat-operators-dqgdw\" (UID: \"b4804ec2-fd33-4ee7-81fd-a39d5688dfbf\") " pod="openshift-marketplace/redhat-operators-dqgdw" Jan 29 15:47:47 crc kubenswrapper[4757]: I0129 15:47:47.712568 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dqgdw" Jan 29 15:47:48 crc kubenswrapper[4757]: I0129 15:47:48.137096 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dqgdw"] Jan 29 15:47:48 crc kubenswrapper[4757]: W0129 15:47:48.148630 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4804ec2_fd33_4ee7_81fd_a39d5688dfbf.slice/crio-4ba414233699cb28957eb9c26f746cd35978c4b104782be8ab5f69ca74a230b5 WatchSource:0}: Error finding container 4ba414233699cb28957eb9c26f746cd35978c4b104782be8ab5f69ca74a230b5: Status 404 returned error can't find the container with id 4ba414233699cb28957eb9c26f746cd35978c4b104782be8ab5f69ca74a230b5 Jan 29 15:47:48 crc kubenswrapper[4757]: I0129 15:47:48.731610 4757 generic.go:334] "Generic (PLEG): container finished" podID="b4804ec2-fd33-4ee7-81fd-a39d5688dfbf" containerID="2fa9a2b407ca30f7f2b46df32d84faed05c574ea70a0dedab2082f8548225ff8" exitCode=0 Jan 29 15:47:48 crc kubenswrapper[4757]: I0129 15:47:48.731664 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqgdw" event={"ID":"b4804ec2-fd33-4ee7-81fd-a39d5688dfbf","Type":"ContainerDied","Data":"2fa9a2b407ca30f7f2b46df32d84faed05c574ea70a0dedab2082f8548225ff8"} Jan 29 15:47:48 crc kubenswrapper[4757]: I0129 15:47:48.731696 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqgdw" event={"ID":"b4804ec2-fd33-4ee7-81fd-a39d5688dfbf","Type":"ContainerStarted","Data":"4ba414233699cb28957eb9c26f746cd35978c4b104782be8ab5f69ca74a230b5"} Jan 29 15:47:48 crc kubenswrapper[4757]: I0129 15:47:48.733888 4757 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 15:47:50 crc kubenswrapper[4757]: I0129 15:47:50.756746 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqgdw" event={"ID":"b4804ec2-fd33-4ee7-81fd-a39d5688dfbf","Type":"ContainerStarted","Data":"be98856053e73937604c5b0dded15ae2a5f59374b6cbbfbb2025d8dc8f8ae1e8"} Jan 29 15:47:51 crc kubenswrapper[4757]: I0129 15:47:51.768733 4757 generic.go:334] "Generic (PLEG): container finished" podID="b4804ec2-fd33-4ee7-81fd-a39d5688dfbf" containerID="be98856053e73937604c5b0dded15ae2a5f59374b6cbbfbb2025d8dc8f8ae1e8" exitCode=0 Jan 29 
15:47:51 crc kubenswrapper[4757]: I0129 15:47:51.768785 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqgdw" event={"ID":"b4804ec2-fd33-4ee7-81fd-a39d5688dfbf","Type":"ContainerDied","Data":"be98856053e73937604c5b0dded15ae2a5f59374b6cbbfbb2025d8dc8f8ae1e8"} Jan 29 15:47:53 crc kubenswrapper[4757]: I0129 15:47:53.787822 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqgdw" event={"ID":"b4804ec2-fd33-4ee7-81fd-a39d5688dfbf","Type":"ContainerStarted","Data":"9ee15df8feaeab1f15bbdf243fab1ea75eafd20c0eb546bc0e5f630d15b3d954"} Jan 29 15:47:53 crc kubenswrapper[4757]: I0129 15:47:53.817054 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dqgdw" podStartSLOduration=2.767136698 podStartE2EDuration="6.817033205s" podCreationTimestamp="2026-01-29 15:47:47 +0000 UTC" firstStartedPulling="2026-01-29 15:47:48.733588085 +0000 UTC m=+2232.022838322" lastFinishedPulling="2026-01-29 15:47:52.783484592 +0000 UTC m=+2236.072734829" observedRunningTime="2026-01-29 15:47:53.813613249 +0000 UTC m=+2237.102863486" watchObservedRunningTime="2026-01-29 15:47:53.817033205 +0000 UTC m=+2237.106283442" Jan 29 15:47:57 crc kubenswrapper[4757]: I0129 15:47:57.714186 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dqgdw" Jan 29 15:47:57 crc kubenswrapper[4757]: I0129 15:47:57.716079 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dqgdw" Jan 29 15:47:58 crc kubenswrapper[4757]: I0129 15:47:58.772366 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dqgdw" podUID="b4804ec2-fd33-4ee7-81fd-a39d5688dfbf" containerName="registry-server" probeResult="failure" output=< Jan 29 15:47:58 crc kubenswrapper[4757]: timeout: failed to connect service ":50051" within 1s Jan 29 15:47:58 crc kubenswrapper[4757]: > Jan 29 15:48:07 crc kubenswrapper[4757]: I0129 15:48:07.764440 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dqgdw" Jan 29 15:48:07 crc kubenswrapper[4757]: I0129 15:48:07.827031 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dqgdw" Jan 29 15:48:08 crc kubenswrapper[4757]: I0129 15:48:08.819432 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dqgdw"] Jan 29 15:48:08 crc kubenswrapper[4757]: I0129 15:48:08.900748 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dqgdw" podUID="b4804ec2-fd33-4ee7-81fd-a39d5688dfbf" containerName="registry-server" containerID="cri-o://9ee15df8feaeab1f15bbdf243fab1ea75eafd20c0eb546bc0e5f630d15b3d954" gracePeriod=2 Jan 29 15:48:09 crc kubenswrapper[4757]: I0129 15:48:09.912478 4757 generic.go:334] "Generic (PLEG): container finished" podID="b4804ec2-fd33-4ee7-81fd-a39d5688dfbf" containerID="9ee15df8feaeab1f15bbdf243fab1ea75eafd20c0eb546bc0e5f630d15b3d954" exitCode=0 Jan 29 15:48:09 crc kubenswrapper[4757]: I0129 15:48:09.912754 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqgdw" event={"ID":"b4804ec2-fd33-4ee7-81fd-a39d5688dfbf","Type":"ContainerDied","Data":"9ee15df8feaeab1f15bbdf243fab1ea75eafd20c0eb546bc0e5f630d15b3d954"} Jan 29 
15:48:09 crc kubenswrapper[4757]: I0129 15:48:09.978658 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dqgdw" Jan 29 15:48:10 crc kubenswrapper[4757]: I0129 15:48:10.019587 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4804ec2-fd33-4ee7-81fd-a39d5688dfbf-utilities\") pod \"b4804ec2-fd33-4ee7-81fd-a39d5688dfbf\" (UID: \"b4804ec2-fd33-4ee7-81fd-a39d5688dfbf\") " Jan 29 15:48:10 crc kubenswrapper[4757]: I0129 15:48:10.019672 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4804ec2-fd33-4ee7-81fd-a39d5688dfbf-catalog-content\") pod \"b4804ec2-fd33-4ee7-81fd-a39d5688dfbf\" (UID: \"b4804ec2-fd33-4ee7-81fd-a39d5688dfbf\") " Jan 29 15:48:10 crc kubenswrapper[4757]: I0129 15:48:10.019912 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4zmc\" (UniqueName: \"kubernetes.io/projected/b4804ec2-fd33-4ee7-81fd-a39d5688dfbf-kube-api-access-z4zmc\") pod \"b4804ec2-fd33-4ee7-81fd-a39d5688dfbf\" (UID: \"b4804ec2-fd33-4ee7-81fd-a39d5688dfbf\") " Jan 29 15:48:10 crc kubenswrapper[4757]: I0129 15:48:10.020763 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4804ec2-fd33-4ee7-81fd-a39d5688dfbf-utilities" (OuterVolumeSpecName: "utilities") pod "b4804ec2-fd33-4ee7-81fd-a39d5688dfbf" (UID: "b4804ec2-fd33-4ee7-81fd-a39d5688dfbf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:48:10 crc kubenswrapper[4757]: I0129 15:48:10.024933 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4804ec2-fd33-4ee7-81fd-a39d5688dfbf-kube-api-access-z4zmc" (OuterVolumeSpecName: "kube-api-access-z4zmc") pod "b4804ec2-fd33-4ee7-81fd-a39d5688dfbf" (UID: "b4804ec2-fd33-4ee7-81fd-a39d5688dfbf"). InnerVolumeSpecName "kube-api-access-z4zmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:48:10 crc kubenswrapper[4757]: I0129 15:48:10.122490 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4zmc\" (UniqueName: \"kubernetes.io/projected/b4804ec2-fd33-4ee7-81fd-a39d5688dfbf-kube-api-access-z4zmc\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:10 crc kubenswrapper[4757]: I0129 15:48:10.122787 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4804ec2-fd33-4ee7-81fd-a39d5688dfbf-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:10 crc kubenswrapper[4757]: I0129 15:48:10.145331 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4804ec2-fd33-4ee7-81fd-a39d5688dfbf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4804ec2-fd33-4ee7-81fd-a39d5688dfbf" (UID: "b4804ec2-fd33-4ee7-81fd-a39d5688dfbf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:48:10 crc kubenswrapper[4757]: I0129 15:48:10.224178 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4804ec2-fd33-4ee7-81fd-a39d5688dfbf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:10 crc kubenswrapper[4757]: I0129 15:48:10.921352 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqgdw" event={"ID":"b4804ec2-fd33-4ee7-81fd-a39d5688dfbf","Type":"ContainerDied","Data":"4ba414233699cb28957eb9c26f746cd35978c4b104782be8ab5f69ca74a230b5"} Jan 29 15:48:10 crc kubenswrapper[4757]: I0129 15:48:10.921686 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dqgdw" Jan 29 15:48:10 crc kubenswrapper[4757]: I0129 15:48:10.921738 4757 scope.go:117] "RemoveContainer" containerID="9ee15df8feaeab1f15bbdf243fab1ea75eafd20c0eb546bc0e5f630d15b3d954" Jan 29 15:48:10 crc kubenswrapper[4757]: I0129 15:48:10.938010 4757 scope.go:117] "RemoveContainer" containerID="be98856053e73937604c5b0dded15ae2a5f59374b6cbbfbb2025d8dc8f8ae1e8" Jan 29 15:48:10 crc kubenswrapper[4757]: I0129 15:48:10.958850 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dqgdw"] Jan 29 15:48:10 crc kubenswrapper[4757]: I0129 15:48:10.969626 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dqgdw"] Jan 29 15:48:10 crc kubenswrapper[4757]: I0129 15:48:10.986487 4757 scope.go:117] "RemoveContainer" containerID="2fa9a2b407ca30f7f2b46df32d84faed05c574ea70a0dedab2082f8548225ff8" Jan 29 15:48:11 crc kubenswrapper[4757]: I0129 15:48:11.409691 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4804ec2-fd33-4ee7-81fd-a39d5688dfbf" path="/var/lib/kubelet/pods/b4804ec2-fd33-4ee7-81fd-a39d5688dfbf/volumes" Jan 29 15:48:17 crc kubenswrapper[4757]: I0129 15:48:17.605350 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:48:17 crc kubenswrapper[4757]: I0129 15:48:17.605682 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:48:47 crc kubenswrapper[4757]: I0129 15:48:47.604802 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:48:47 crc kubenswrapper[4757]: I0129 15:48:47.605377 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:48:47 crc kubenswrapper[4757]: I0129 15:48:47.605430 4757 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" Jan 29 15:48:47 crc kubenswrapper[4757]: I0129 15:48:47.606165 4757 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e"} pod="openshift-machine-config-operator/machine-config-daemon-45q8t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:48:47 crc kubenswrapper[4757]: I0129 15:48:47.606253 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" containerID="cri-o://7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e" gracePeriod=600 Jan 29 15:48:47 crc kubenswrapper[4757]: E0129 15:48:47.749822 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:48:48 crc kubenswrapper[4757]: I0129 15:48:48.180547 4757 generic.go:334] "Generic (PLEG): container finished" podID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerID="7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e" exitCode=0 Jan 29 15:48:48 crc kubenswrapper[4757]: I0129 15:48:48.180587 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerDied","Data":"7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e"} Jan 29 15:48:48 crc kubenswrapper[4757]: I0129 15:48:48.180622 4757 scope.go:117] "RemoveContainer" containerID="60fa87b617c1542c879897bee41087b09b00b7c22cd079c2dbce29eda0b6c165" Jan 29 15:48:48 crc kubenswrapper[4757]: I0129 15:48:48.181139 4757 scope.go:117] "RemoveContainer" containerID="7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e" Jan 29 15:48:48 crc kubenswrapper[4757]: E0129 15:48:48.181408 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:48:59 crc kubenswrapper[4757]: I0129 15:48:59.396877 4757 scope.go:117] "RemoveContainer" containerID="7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e" Jan 29 15:48:59 crc kubenswrapper[4757]: E0129 15:48:59.397613 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" 
podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:49:11 crc kubenswrapper[4757]: I0129 15:49:11.397182 4757 scope.go:117] "RemoveContainer" containerID="7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e" Jan 29 15:49:11 crc kubenswrapper[4757]: E0129 15:49:11.397955 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:49:22 crc kubenswrapper[4757]: I0129 15:49:22.397023 4757 scope.go:117] "RemoveContainer" containerID="7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e" Jan 29 15:49:22 crc kubenswrapper[4757]: E0129 15:49:22.398337 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:49:35 crc kubenswrapper[4757]: I0129 15:49:35.961887 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xgns8"] Jan 29 15:49:35 crc kubenswrapper[4757]: E0129 15:49:35.962823 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4804ec2-fd33-4ee7-81fd-a39d5688dfbf" containerName="extract-utilities" Jan 29 15:49:35 crc kubenswrapper[4757]: I0129 15:49:35.962838 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4804ec2-fd33-4ee7-81fd-a39d5688dfbf" containerName="extract-utilities" Jan 29 15:49:35 crc kubenswrapper[4757]: E0129 15:49:35.962851 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4804ec2-fd33-4ee7-81fd-a39d5688dfbf" containerName="registry-server" Jan 29 15:49:35 crc kubenswrapper[4757]: I0129 15:49:35.962859 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4804ec2-fd33-4ee7-81fd-a39d5688dfbf" containerName="registry-server" Jan 29 15:49:35 crc kubenswrapper[4757]: E0129 15:49:35.962884 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4804ec2-fd33-4ee7-81fd-a39d5688dfbf" containerName="extract-content" Jan 29 15:49:35 crc kubenswrapper[4757]: I0129 15:49:35.962891 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4804ec2-fd33-4ee7-81fd-a39d5688dfbf" containerName="extract-content" Jan 29 15:49:35 crc kubenswrapper[4757]: I0129 15:49:35.963048 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4804ec2-fd33-4ee7-81fd-a39d5688dfbf" containerName="registry-server" Jan 29 15:49:35 crc kubenswrapper[4757]: I0129 15:49:35.964195 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xgns8" Jan 29 15:49:35 crc kubenswrapper[4757]: I0129 15:49:35.966103 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xgns8"] Jan 29 15:49:36 crc kubenswrapper[4757]: I0129 15:49:36.131880 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-967n4\" (UniqueName: \"kubernetes.io/projected/f122e38c-4c1c-449a-a2d1-c11e8d642caf-kube-api-access-967n4\") pod \"redhat-marketplace-xgns8\" (UID: \"f122e38c-4c1c-449a-a2d1-c11e8d642caf\") " pod="openshift-marketplace/redhat-marketplace-xgns8" Jan 29 15:49:36 crc kubenswrapper[4757]: I0129 15:49:36.132346 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f122e38c-4c1c-449a-a2d1-c11e8d642caf-utilities\") pod \"redhat-marketplace-xgns8\" (UID: \"f122e38c-4c1c-449a-a2d1-c11e8d642caf\") " pod="openshift-marketplace/redhat-marketplace-xgns8" Jan 29 15:49:36 crc kubenswrapper[4757]: I0129 15:49:36.132460 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f122e38c-4c1c-449a-a2d1-c11e8d642caf-catalog-content\") pod \"redhat-marketplace-xgns8\" (UID: \"f122e38c-4c1c-449a-a2d1-c11e8d642caf\") " pod="openshift-marketplace/redhat-marketplace-xgns8" Jan 29 15:49:36 crc kubenswrapper[4757]: I0129 15:49:36.157720 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h88cj"] Jan 29 15:49:36 crc kubenswrapper[4757]: I0129 15:49:36.166042 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h88cj" Jan 29 15:49:36 crc kubenswrapper[4757]: I0129 15:49:36.181788 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h88cj"] Jan 29 15:49:36 crc kubenswrapper[4757]: I0129 15:49:36.234184 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f122e38c-4c1c-449a-a2d1-c11e8d642caf-catalog-content\") pod \"redhat-marketplace-xgns8\" (UID: \"f122e38c-4c1c-449a-a2d1-c11e8d642caf\") " pod="openshift-marketplace/redhat-marketplace-xgns8" Jan 29 15:49:36 crc kubenswrapper[4757]: I0129 15:49:36.234249 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-967n4\" (UniqueName: \"kubernetes.io/projected/f122e38c-4c1c-449a-a2d1-c11e8d642caf-kube-api-access-967n4\") pod \"redhat-marketplace-xgns8\" (UID: \"f122e38c-4c1c-449a-a2d1-c11e8d642caf\") " pod="openshift-marketplace/redhat-marketplace-xgns8" Jan 29 15:49:36 crc kubenswrapper[4757]: I0129 15:49:36.234326 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f122e38c-4c1c-449a-a2d1-c11e8d642caf-utilities\") pod \"redhat-marketplace-xgns8\" (UID: \"f122e38c-4c1c-449a-a2d1-c11e8d642caf\") " pod="openshift-marketplace/redhat-marketplace-xgns8" Jan 29 15:49:36 crc kubenswrapper[4757]: I0129 15:49:36.234733 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f122e38c-4c1c-449a-a2d1-c11e8d642caf-catalog-content\") pod \"redhat-marketplace-xgns8\" (UID: \"f122e38c-4c1c-449a-a2d1-c11e8d642caf\") " 
pod="openshift-marketplace/redhat-marketplace-xgns8" Jan 29 15:49:36 crc kubenswrapper[4757]: I0129 15:49:36.235382 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f122e38c-4c1c-449a-a2d1-c11e8d642caf-utilities\") pod \"redhat-marketplace-xgns8\" (UID: \"f122e38c-4c1c-449a-a2d1-c11e8d642caf\") " pod="openshift-marketplace/redhat-marketplace-xgns8" Jan 29 15:49:36 crc kubenswrapper[4757]: I0129 15:49:36.257726 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-967n4\" (UniqueName: \"kubernetes.io/projected/f122e38c-4c1c-449a-a2d1-c11e8d642caf-kube-api-access-967n4\") pod \"redhat-marketplace-xgns8\" (UID: \"f122e38c-4c1c-449a-a2d1-c11e8d642caf\") " pod="openshift-marketplace/redhat-marketplace-xgns8" Jan 29 15:49:36 crc kubenswrapper[4757]: I0129 15:49:36.319586 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xgns8" Jan 29 15:49:36 crc kubenswrapper[4757]: I0129 15:49:36.336088 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33ac766-7155-4dad-899c-f5b3de1d9a80-utilities\") pod \"certified-operators-h88cj\" (UID: \"c33ac766-7155-4dad-899c-f5b3de1d9a80\") " pod="openshift-marketplace/certified-operators-h88cj" Jan 29 15:49:36 crc kubenswrapper[4757]: I0129 15:49:36.336562 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrgw9\" (UniqueName: \"kubernetes.io/projected/c33ac766-7155-4dad-899c-f5b3de1d9a80-kube-api-access-rrgw9\") pod \"certified-operators-h88cj\" (UID: \"c33ac766-7155-4dad-899c-f5b3de1d9a80\") " pod="openshift-marketplace/certified-operators-h88cj" Jan 29 15:49:36 crc kubenswrapper[4757]: I0129 15:49:36.336692 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33ac766-7155-4dad-899c-f5b3de1d9a80-catalog-content\") pod \"certified-operators-h88cj\" (UID: \"c33ac766-7155-4dad-899c-f5b3de1d9a80\") " pod="openshift-marketplace/certified-operators-h88cj" Jan 29 15:49:36 crc kubenswrapper[4757]: I0129 15:49:36.437977 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33ac766-7155-4dad-899c-f5b3de1d9a80-utilities\") pod \"certified-operators-h88cj\" (UID: \"c33ac766-7155-4dad-899c-f5b3de1d9a80\") " pod="openshift-marketplace/certified-operators-h88cj" Jan 29 15:49:36 crc kubenswrapper[4757]: I0129 15:49:36.438028 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrgw9\" (UniqueName: \"kubernetes.io/projected/c33ac766-7155-4dad-899c-f5b3de1d9a80-kube-api-access-rrgw9\") pod \"certified-operators-h88cj\" (UID: \"c33ac766-7155-4dad-899c-f5b3de1d9a80\") " pod="openshift-marketplace/certified-operators-h88cj" Jan 29 15:49:36 crc kubenswrapper[4757]: I0129 15:49:36.438068 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33ac766-7155-4dad-899c-f5b3de1d9a80-catalog-content\") pod \"certified-operators-h88cj\" (UID: \"c33ac766-7155-4dad-899c-f5b3de1d9a80\") " pod="openshift-marketplace/certified-operators-h88cj" Jan 29 15:49:36 crc kubenswrapper[4757]: I0129 15:49:36.438576 4757 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33ac766-7155-4dad-899c-f5b3de1d9a80-catalog-content\") pod \"certified-operators-h88cj\" (UID: \"c33ac766-7155-4dad-899c-f5b3de1d9a80\") " pod="openshift-marketplace/certified-operators-h88cj" Jan 29 15:49:36 crc kubenswrapper[4757]: I0129 15:49:36.438665 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33ac766-7155-4dad-899c-f5b3de1d9a80-utilities\") pod \"certified-operators-h88cj\" (UID: \"c33ac766-7155-4dad-899c-f5b3de1d9a80\") " pod="openshift-marketplace/certified-operators-h88cj" Jan 29 15:49:36 crc kubenswrapper[4757]: I0129 15:49:36.478899 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrgw9\" (UniqueName: \"kubernetes.io/projected/c33ac766-7155-4dad-899c-f5b3de1d9a80-kube-api-access-rrgw9\") pod \"certified-operators-h88cj\" (UID: \"c33ac766-7155-4dad-899c-f5b3de1d9a80\") " pod="openshift-marketplace/certified-operators-h88cj" Jan 29 15:49:36 crc kubenswrapper[4757]: I0129 15:49:36.491077 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h88cj" Jan 29 15:49:36 crc kubenswrapper[4757]: I0129 15:49:36.630364 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xgns8"] Jan 29 15:49:37 crc kubenswrapper[4757]: I0129 15:49:37.136352 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h88cj"] Jan 29 15:49:37 crc kubenswrapper[4757]: W0129 15:49:37.136998 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc33ac766_7155_4dad_899c_f5b3de1d9a80.slice/crio-ee3fecfd4da80307c63e53ec7a71db3b695e29b187f703edc7a7a488d3423b09 WatchSource:0}: Error finding container ee3fecfd4da80307c63e53ec7a71db3b695e29b187f703edc7a7a488d3423b09: Status 404 returned error can't find the container with id ee3fecfd4da80307c63e53ec7a71db3b695e29b187f703edc7a7a488d3423b09 Jan 29 15:49:37 crc kubenswrapper[4757]: I0129 15:49:37.411120 4757 scope.go:117] "RemoveContainer" containerID="7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e" Jan 29 15:49:37 crc kubenswrapper[4757]: E0129 15:49:37.411333 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:49:37 crc kubenswrapper[4757]: I0129 15:49:37.510353 4757 generic.go:334] "Generic (PLEG): container finished" podID="f122e38c-4c1c-449a-a2d1-c11e8d642caf" containerID="3c5764932a7d1f4fa604a318056fe0e386b64d64b230f9f9ee58ade0f6572703" exitCode=0 Jan 29 15:49:37 crc kubenswrapper[4757]: I0129 15:49:37.510429 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xgns8" event={"ID":"f122e38c-4c1c-449a-a2d1-c11e8d642caf","Type":"ContainerDied","Data":"3c5764932a7d1f4fa604a318056fe0e386b64d64b230f9f9ee58ade0f6572703"} Jan 29 15:49:37 crc kubenswrapper[4757]: I0129 15:49:37.511242 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-xgns8" event={"ID":"f122e38c-4c1c-449a-a2d1-c11e8d642caf","Type":"ContainerStarted","Data":"44985c5a366b4e9c1686ac30fd15cf54e99c89ed5f2eb1f6d0d4931f0d7dc1d7"} Jan 29 15:49:37 crc kubenswrapper[4757]: I0129 15:49:37.513252 4757 generic.go:334] "Generic (PLEG): container finished" podID="c33ac766-7155-4dad-899c-f5b3de1d9a80" containerID="bba77ae139210fbf609a5e33392d3f120e6a2557e2ebb31d7e72f4cb8282b791" exitCode=0 Jan 29 15:49:37 crc kubenswrapper[4757]: I0129 15:49:37.513304 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h88cj" event={"ID":"c33ac766-7155-4dad-899c-f5b3de1d9a80","Type":"ContainerDied","Data":"bba77ae139210fbf609a5e33392d3f120e6a2557e2ebb31d7e72f4cb8282b791"} Jan 29 15:49:37 crc kubenswrapper[4757]: I0129 15:49:37.513331 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h88cj" event={"ID":"c33ac766-7155-4dad-899c-f5b3de1d9a80","Type":"ContainerStarted","Data":"ee3fecfd4da80307c63e53ec7a71db3b695e29b187f703edc7a7a488d3423b09"} Jan 29 15:49:39 crc kubenswrapper[4757]: I0129 15:49:39.549916 4757 generic.go:334] "Generic (PLEG): container finished" podID="f122e38c-4c1c-449a-a2d1-c11e8d642caf" containerID="b8a8a94eb0449d8f5de80413882f210fff2c70720994f490d8745a8514551cbd" exitCode=0 Jan 29 15:49:39 crc kubenswrapper[4757]: I0129 15:49:39.550393 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xgns8" event={"ID":"f122e38c-4c1c-449a-a2d1-c11e8d642caf","Type":"ContainerDied","Data":"b8a8a94eb0449d8f5de80413882f210fff2c70720994f490d8745a8514551cbd"} Jan 29 15:49:41 crc kubenswrapper[4757]: I0129 15:49:41.563886 4757 generic.go:334] "Generic (PLEG): container finished" podID="c33ac766-7155-4dad-899c-f5b3de1d9a80" containerID="db6795782d87794a8d13762167460523ece381d4325033f52c3705fb971c02e3" exitCode=0 Jan 29 15:49:41 crc kubenswrapper[4757]: I0129 15:49:41.563961 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h88cj" event={"ID":"c33ac766-7155-4dad-899c-f5b3de1d9a80","Type":"ContainerDied","Data":"db6795782d87794a8d13762167460523ece381d4325033f52c3705fb971c02e3"} Jan 29 15:49:43 crc kubenswrapper[4757]: I0129 15:49:43.584182 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xgns8" event={"ID":"f122e38c-4c1c-449a-a2d1-c11e8d642caf","Type":"ContainerStarted","Data":"484a4a031e336951007640a5c298ba33f670c30c2f0b195de860dd029254773c"} Jan 29 15:49:43 crc kubenswrapper[4757]: I0129 15:49:43.586224 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h88cj" event={"ID":"c33ac766-7155-4dad-899c-f5b3de1d9a80","Type":"ContainerStarted","Data":"52e9e45f3644bcbd202b5ee5275d7e4ae8e8f1da488b48516598a4c23b88e1b7"} Jan 29 15:49:43 crc kubenswrapper[4757]: I0129 15:49:43.613099 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xgns8" podStartSLOduration=3.884172183 podStartE2EDuration="8.613070356s" podCreationTimestamp="2026-01-29 15:49:35 +0000 UTC" firstStartedPulling="2026-01-29 15:49:37.511787699 +0000 UTC m=+2340.801037936" lastFinishedPulling="2026-01-29 15:49:42.240685872 +0000 UTC m=+2345.529936109" observedRunningTime="2026-01-29 15:49:43.601834231 +0000 UTC m=+2346.891084508" watchObservedRunningTime="2026-01-29 15:49:43.613070356 +0000 UTC m=+2346.902320603" Jan 29 
15:49:43 crc kubenswrapper[4757]: I0129 15:49:43.622437 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h88cj" podStartSLOduration=3.20240252 podStartE2EDuration="7.622418788s" podCreationTimestamp="2026-01-29 15:49:36 +0000 UTC" firstStartedPulling="2026-01-29 15:49:38.672675697 +0000 UTC m=+2341.961925944" lastFinishedPulling="2026-01-29 15:49:43.092691975 +0000 UTC m=+2346.381942212" observedRunningTime="2026-01-29 15:49:43.621987416 +0000 UTC m=+2346.911237663" watchObservedRunningTime="2026-01-29 15:49:43.622418788 +0000 UTC m=+2346.911669025" Jan 29 15:49:46 crc kubenswrapper[4757]: I0129 15:49:46.320675 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xgns8" Jan 29 15:49:46 crc kubenswrapper[4757]: I0129 15:49:46.321144 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xgns8" Jan 29 15:49:46 crc kubenswrapper[4757]: I0129 15:49:46.454708 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xgns8" Jan 29 15:49:46 crc kubenswrapper[4757]: I0129 15:49:46.491994 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h88cj" Jan 29 15:49:46 crc kubenswrapper[4757]: I0129 15:49:46.492248 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h88cj" Jan 29 15:49:46 crc kubenswrapper[4757]: I0129 15:49:46.529718 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h88cj" Jan 29 15:49:48 crc kubenswrapper[4757]: I0129 15:49:48.396516 4757 scope.go:117] "RemoveContainer" containerID="7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e" Jan 29 15:49:48 crc kubenswrapper[4757]: E0129 15:49:48.396740 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:49:56 crc kubenswrapper[4757]: I0129 15:49:56.360598 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xgns8" Jan 29 15:49:56 crc kubenswrapper[4757]: I0129 15:49:56.405245 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xgns8"] Jan 29 15:49:56 crc kubenswrapper[4757]: I0129 15:49:56.544334 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h88cj" Jan 29 15:49:56 crc kubenswrapper[4757]: I0129 15:49:56.685035 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xgns8" podUID="f122e38c-4c1c-449a-a2d1-c11e8d642caf" containerName="registry-server" containerID="cri-o://484a4a031e336951007640a5c298ba33f670c30c2f0b195de860dd029254773c" gracePeriod=2 Jan 29 15:49:57 crc kubenswrapper[4757]: I0129 15:49:57.692529 4757 generic.go:334] "Generic (PLEG): container finished" podID="f122e38c-4c1c-449a-a2d1-c11e8d642caf" 
containerID="484a4a031e336951007640a5c298ba33f670c30c2f0b195de860dd029254773c" exitCode=0 Jan 29 15:49:57 crc kubenswrapper[4757]: I0129 15:49:57.692599 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xgns8" event={"ID":"f122e38c-4c1c-449a-a2d1-c11e8d642caf","Type":"ContainerDied","Data":"484a4a031e336951007640a5c298ba33f670c30c2f0b195de860dd029254773c"} Jan 29 15:49:57 crc kubenswrapper[4757]: I0129 15:49:57.774679 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xgns8" Jan 29 15:49:57 crc kubenswrapper[4757]: I0129 15:49:57.872774 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-967n4\" (UniqueName: \"kubernetes.io/projected/f122e38c-4c1c-449a-a2d1-c11e8d642caf-kube-api-access-967n4\") pod \"f122e38c-4c1c-449a-a2d1-c11e8d642caf\" (UID: \"f122e38c-4c1c-449a-a2d1-c11e8d642caf\") " Jan 29 15:49:57 crc kubenswrapper[4757]: I0129 15:49:57.872854 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f122e38c-4c1c-449a-a2d1-c11e8d642caf-catalog-content\") pod \"f122e38c-4c1c-449a-a2d1-c11e8d642caf\" (UID: \"f122e38c-4c1c-449a-a2d1-c11e8d642caf\") " Jan 29 15:49:57 crc kubenswrapper[4757]: I0129 15:49:57.872918 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f122e38c-4c1c-449a-a2d1-c11e8d642caf-utilities\") pod \"f122e38c-4c1c-449a-a2d1-c11e8d642caf\" (UID: \"f122e38c-4c1c-449a-a2d1-c11e8d642caf\") " Jan 29 15:49:57 crc kubenswrapper[4757]: I0129 15:49:57.874763 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f122e38c-4c1c-449a-a2d1-c11e8d642caf-utilities" (OuterVolumeSpecName: "utilities") pod "f122e38c-4c1c-449a-a2d1-c11e8d642caf" (UID: "f122e38c-4c1c-449a-a2d1-c11e8d642caf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:49:57 crc kubenswrapper[4757]: I0129 15:49:57.879344 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f122e38c-4c1c-449a-a2d1-c11e8d642caf-kube-api-access-967n4" (OuterVolumeSpecName: "kube-api-access-967n4") pod "f122e38c-4c1c-449a-a2d1-c11e8d642caf" (UID: "f122e38c-4c1c-449a-a2d1-c11e8d642caf"). InnerVolumeSpecName "kube-api-access-967n4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:57 crc kubenswrapper[4757]: I0129 15:49:57.909597 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f122e38c-4c1c-449a-a2d1-c11e8d642caf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f122e38c-4c1c-449a-a2d1-c11e8d642caf" (UID: "f122e38c-4c1c-449a-a2d1-c11e8d642caf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:49:57 crc kubenswrapper[4757]: I0129 15:49:57.975377 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-967n4\" (UniqueName: \"kubernetes.io/projected/f122e38c-4c1c-449a-a2d1-c11e8d642caf-kube-api-access-967n4\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:57 crc kubenswrapper[4757]: I0129 15:49:57.975405 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f122e38c-4c1c-449a-a2d1-c11e8d642caf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:57 crc kubenswrapper[4757]: I0129 15:49:57.975416 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f122e38c-4c1c-449a-a2d1-c11e8d642caf-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:57 crc kubenswrapper[4757]: I0129 15:49:57.993195 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h88cj"] Jan 29 15:49:57 crc kubenswrapper[4757]: I0129 15:49:57.993488 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h88cj" podUID="c33ac766-7155-4dad-899c-f5b3de1d9a80" containerName="registry-server" containerID="cri-o://52e9e45f3644bcbd202b5ee5275d7e4ae8e8f1da488b48516598a4c23b88e1b7" gracePeriod=2 Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.611973 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h88cj" Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.687486 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33ac766-7155-4dad-899c-f5b3de1d9a80-utilities\") pod \"c33ac766-7155-4dad-899c-f5b3de1d9a80\" (UID: \"c33ac766-7155-4dad-899c-f5b3de1d9a80\") " Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.687662 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrgw9\" (UniqueName: \"kubernetes.io/projected/c33ac766-7155-4dad-899c-f5b3de1d9a80-kube-api-access-rrgw9\") pod \"c33ac766-7155-4dad-899c-f5b3de1d9a80\" (UID: \"c33ac766-7155-4dad-899c-f5b3de1d9a80\") " Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.687728 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33ac766-7155-4dad-899c-f5b3de1d9a80-catalog-content\") pod \"c33ac766-7155-4dad-899c-f5b3de1d9a80\" (UID: \"c33ac766-7155-4dad-899c-f5b3de1d9a80\") " Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.688723 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c33ac766-7155-4dad-899c-f5b3de1d9a80-utilities" (OuterVolumeSpecName: "utilities") pod "c33ac766-7155-4dad-899c-f5b3de1d9a80" (UID: "c33ac766-7155-4dad-899c-f5b3de1d9a80"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.707608 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c33ac766-7155-4dad-899c-f5b3de1d9a80-kube-api-access-rrgw9" (OuterVolumeSpecName: "kube-api-access-rrgw9") pod "c33ac766-7155-4dad-899c-f5b3de1d9a80" (UID: "c33ac766-7155-4dad-899c-f5b3de1d9a80"). InnerVolumeSpecName "kube-api-access-rrgw9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.713773 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xgns8" event={"ID":"f122e38c-4c1c-449a-a2d1-c11e8d642caf","Type":"ContainerDied","Data":"44985c5a366b4e9c1686ac30fd15cf54e99c89ed5f2eb1f6d0d4931f0d7dc1d7"} Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.713829 4757 scope.go:117] "RemoveContainer" containerID="484a4a031e336951007640a5c298ba33f670c30c2f0b195de860dd029254773c" Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.713864 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xgns8" Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.719177 4757 generic.go:334] "Generic (PLEG): container finished" podID="c33ac766-7155-4dad-899c-f5b3de1d9a80" containerID="52e9e45f3644bcbd202b5ee5275d7e4ae8e8f1da488b48516598a4c23b88e1b7" exitCode=0 Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.719207 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h88cj" event={"ID":"c33ac766-7155-4dad-899c-f5b3de1d9a80","Type":"ContainerDied","Data":"52e9e45f3644bcbd202b5ee5275d7e4ae8e8f1da488b48516598a4c23b88e1b7"} Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.719219 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h88cj" Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.719230 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h88cj" event={"ID":"c33ac766-7155-4dad-899c-f5b3de1d9a80","Type":"ContainerDied","Data":"ee3fecfd4da80307c63e53ec7a71db3b695e29b187f703edc7a7a488d3423b09"} Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.734142 4757 scope.go:117] "RemoveContainer" containerID="b8a8a94eb0449d8f5de80413882f210fff2c70720994f490d8745a8514551cbd" Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.735382 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c33ac766-7155-4dad-899c-f5b3de1d9a80-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c33ac766-7155-4dad-899c-f5b3de1d9a80" (UID: "c33ac766-7155-4dad-899c-f5b3de1d9a80"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.752121 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xgns8"] Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.761634 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xgns8"] Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.765458 4757 scope.go:117] "RemoveContainer" containerID="3c5764932a7d1f4fa604a318056fe0e386b64d64b230f9f9ee58ade0f6572703" Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.789571 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c33ac766-7155-4dad-899c-f5b3de1d9a80-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.789824 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c33ac766-7155-4dad-899c-f5b3de1d9a80-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.789936 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrgw9\" (UniqueName: \"kubernetes.io/projected/c33ac766-7155-4dad-899c-f5b3de1d9a80-kube-api-access-rrgw9\") on node \"crc\" DevicePath \"\"" Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.829176 4757 scope.go:117] "RemoveContainer" containerID="52e9e45f3644bcbd202b5ee5275d7e4ae8e8f1da488b48516598a4c23b88e1b7" Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.847696 4757 scope.go:117] "RemoveContainer" containerID="db6795782d87794a8d13762167460523ece381d4325033f52c3705fb971c02e3" Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.868793 4757 scope.go:117] "RemoveContainer" containerID="bba77ae139210fbf609a5e33392d3f120e6a2557e2ebb31d7e72f4cb8282b791" Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.893111 4757 scope.go:117] "RemoveContainer" containerID="52e9e45f3644bcbd202b5ee5275d7e4ae8e8f1da488b48516598a4c23b88e1b7" Jan 29 15:49:58 crc kubenswrapper[4757]: E0129 15:49:58.898436 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52e9e45f3644bcbd202b5ee5275d7e4ae8e8f1da488b48516598a4c23b88e1b7\": container with ID starting with 52e9e45f3644bcbd202b5ee5275d7e4ae8e8f1da488b48516598a4c23b88e1b7 not found: ID does not exist" containerID="52e9e45f3644bcbd202b5ee5275d7e4ae8e8f1da488b48516598a4c23b88e1b7" Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.898530 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52e9e45f3644bcbd202b5ee5275d7e4ae8e8f1da488b48516598a4c23b88e1b7"} err="failed to get container status \"52e9e45f3644bcbd202b5ee5275d7e4ae8e8f1da488b48516598a4c23b88e1b7\": rpc error: code = NotFound desc = could not find container \"52e9e45f3644bcbd202b5ee5275d7e4ae8e8f1da488b48516598a4c23b88e1b7\": container with ID starting with 52e9e45f3644bcbd202b5ee5275d7e4ae8e8f1da488b48516598a4c23b88e1b7 not found: ID does not exist" Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.898584 4757 scope.go:117] "RemoveContainer" containerID="db6795782d87794a8d13762167460523ece381d4325033f52c3705fb971c02e3" Jan 29 15:49:58 crc kubenswrapper[4757]: E0129 15:49:58.899017 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"db6795782d87794a8d13762167460523ece381d4325033f52c3705fb971c02e3\": container with ID starting with db6795782d87794a8d13762167460523ece381d4325033f52c3705fb971c02e3 not found: ID does not exist" containerID="db6795782d87794a8d13762167460523ece381d4325033f52c3705fb971c02e3" Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.899044 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db6795782d87794a8d13762167460523ece381d4325033f52c3705fb971c02e3"} err="failed to get container status \"db6795782d87794a8d13762167460523ece381d4325033f52c3705fb971c02e3\": rpc error: code = NotFound desc = could not find container \"db6795782d87794a8d13762167460523ece381d4325033f52c3705fb971c02e3\": container with ID starting with db6795782d87794a8d13762167460523ece381d4325033f52c3705fb971c02e3 not found: ID does not exist" Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.899103 4757 scope.go:117] "RemoveContainer" containerID="bba77ae139210fbf609a5e33392d3f120e6a2557e2ebb31d7e72f4cb8282b791" Jan 29 15:49:58 crc kubenswrapper[4757]: E0129 15:49:58.899542 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bba77ae139210fbf609a5e33392d3f120e6a2557e2ebb31d7e72f4cb8282b791\": container with ID starting with bba77ae139210fbf609a5e33392d3f120e6a2557e2ebb31d7e72f4cb8282b791 not found: ID does not exist" containerID="bba77ae139210fbf609a5e33392d3f120e6a2557e2ebb31d7e72f4cb8282b791" Jan 29 15:49:58 crc kubenswrapper[4757]: I0129 15:49:58.899592 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bba77ae139210fbf609a5e33392d3f120e6a2557e2ebb31d7e72f4cb8282b791"} err="failed to get container status \"bba77ae139210fbf609a5e33392d3f120e6a2557e2ebb31d7e72f4cb8282b791\": rpc error: code = NotFound desc = could not find container \"bba77ae139210fbf609a5e33392d3f120e6a2557e2ebb31d7e72f4cb8282b791\": container with ID starting with bba77ae139210fbf609a5e33392d3f120e6a2557e2ebb31d7e72f4cb8282b791 not found: ID does not exist" Jan 29 15:49:59 crc kubenswrapper[4757]: I0129 15:49:59.054356 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h88cj"] Jan 29 15:49:59 crc kubenswrapper[4757]: I0129 15:49:59.064210 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h88cj"] Jan 29 15:49:59 crc kubenswrapper[4757]: I0129 15:49:59.405003 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c33ac766-7155-4dad-899c-f5b3de1d9a80" path="/var/lib/kubelet/pods/c33ac766-7155-4dad-899c-f5b3de1d9a80/volumes" Jan 29 15:49:59 crc kubenswrapper[4757]: I0129 15:49:59.405878 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f122e38c-4c1c-449a-a2d1-c11e8d642caf" path="/var/lib/kubelet/pods/f122e38c-4c1c-449a-a2d1-c11e8d642caf/volumes" Jan 29 15:50:00 crc kubenswrapper[4757]: I0129 15:50:00.396123 4757 scope.go:117] "RemoveContainer" containerID="7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e" Jan 29 15:50:00 crc kubenswrapper[4757]: E0129 15:50:00.396390 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:50:14 crc kubenswrapper[4757]: I0129 15:50:14.396651 4757 scope.go:117] "RemoveContainer" containerID="7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e" Jan 29 15:50:14 crc kubenswrapper[4757]: E0129 15:50:14.397496 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:50:25 crc kubenswrapper[4757]: I0129 15:50:25.396027 4757 scope.go:117] "RemoveContainer" containerID="7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e" Jan 29 15:50:25 crc kubenswrapper[4757]: E0129 15:50:25.396808 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:50:40 crc kubenswrapper[4757]: I0129 15:50:40.396250 4757 scope.go:117] "RemoveContainer" containerID="7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e" Jan 29 15:50:40 crc kubenswrapper[4757]: E0129 15:50:40.397349 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:50:52 crc kubenswrapper[4757]: I0129 15:50:52.396759 4757 scope.go:117] "RemoveContainer" containerID="7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e" Jan 29 15:50:52 crc kubenswrapper[4757]: E0129 15:50:52.398157 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:51:06 crc kubenswrapper[4757]: I0129 15:51:06.396516 4757 scope.go:117] "RemoveContainer" containerID="7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e" Jan 29 15:51:06 crc kubenswrapper[4757]: E0129 15:51:06.397218 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:51:19 crc kubenswrapper[4757]: I0129 15:51:19.397282 4757 
scope.go:117] "RemoveContainer" containerID="7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e" Jan 29 15:51:19 crc kubenswrapper[4757]: E0129 15:51:19.398023 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:51:32 crc kubenswrapper[4757]: I0129 15:51:32.396578 4757 scope.go:117] "RemoveContainer" containerID="7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e" Jan 29 15:51:32 crc kubenswrapper[4757]: E0129 15:51:32.397389 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:51:47 crc kubenswrapper[4757]: I0129 15:51:47.404696 4757 scope.go:117] "RemoveContainer" containerID="7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e" Jan 29 15:51:47 crc kubenswrapper[4757]: E0129 15:51:47.405732 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:52:00 crc kubenswrapper[4757]: I0129 15:52:00.396253 4757 scope.go:117] "RemoveContainer" containerID="7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e" Jan 29 15:52:00 crc kubenswrapper[4757]: E0129 15:52:00.396928 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:52:15 crc kubenswrapper[4757]: I0129 15:52:15.396238 4757 scope.go:117] "RemoveContainer" containerID="7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e" Jan 29 15:52:15 crc kubenswrapper[4757]: E0129 15:52:15.397209 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:52:30 crc kubenswrapper[4757]: I0129 15:52:30.396548 4757 scope.go:117] "RemoveContainer" containerID="7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e" Jan 29 15:52:30 crc kubenswrapper[4757]: E0129 15:52:30.398198 4757 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:52:42 crc kubenswrapper[4757]: I0129 15:52:42.396256 4757 scope.go:117] "RemoveContainer" containerID="7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e" Jan 29 15:52:42 crc kubenswrapper[4757]: E0129 15:52:42.397054 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:52:53 crc kubenswrapper[4757]: I0129 15:52:53.396901 4757 scope.go:117] "RemoveContainer" containerID="7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e" Jan 29 15:52:53 crc kubenswrapper[4757]: E0129 15:52:53.398156 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:52:54 crc kubenswrapper[4757]: I0129 15:52:54.050129 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-22dg7"] Jan 29 15:52:54 crc kubenswrapper[4757]: E0129 15:52:54.050900 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c33ac766-7155-4dad-899c-f5b3de1d9a80" containerName="registry-server" Jan 29 15:52:54 crc kubenswrapper[4757]: I0129 15:52:54.050929 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33ac766-7155-4dad-899c-f5b3de1d9a80" containerName="registry-server" Jan 29 15:52:54 crc kubenswrapper[4757]: E0129 15:52:54.050949 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c33ac766-7155-4dad-899c-f5b3de1d9a80" containerName="extract-content" Jan 29 15:52:54 crc kubenswrapper[4757]: I0129 15:52:54.050962 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33ac766-7155-4dad-899c-f5b3de1d9a80" containerName="extract-content" Jan 29 15:52:54 crc kubenswrapper[4757]: E0129 15:52:54.050993 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f122e38c-4c1c-449a-a2d1-c11e8d642caf" containerName="extract-content" Jan 29 15:52:54 crc kubenswrapper[4757]: I0129 15:52:54.051005 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="f122e38c-4c1c-449a-a2d1-c11e8d642caf" containerName="extract-content" Jan 29 15:52:54 crc kubenswrapper[4757]: E0129 15:52:54.051036 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f122e38c-4c1c-449a-a2d1-c11e8d642caf" containerName="extract-utilities" Jan 29 15:52:54 crc kubenswrapper[4757]: I0129 15:52:54.051048 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="f122e38c-4c1c-449a-a2d1-c11e8d642caf" containerName="extract-utilities" Jan 29 15:52:54 crc 
kubenswrapper[4757]: E0129 15:52:54.051063 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f122e38c-4c1c-449a-a2d1-c11e8d642caf" containerName="registry-server" Jan 29 15:52:54 crc kubenswrapper[4757]: I0129 15:52:54.051075 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="f122e38c-4c1c-449a-a2d1-c11e8d642caf" containerName="registry-server" Jan 29 15:52:54 crc kubenswrapper[4757]: E0129 15:52:54.051094 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c33ac766-7155-4dad-899c-f5b3de1d9a80" containerName="extract-utilities" Jan 29 15:52:54 crc kubenswrapper[4757]: I0129 15:52:54.051105 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33ac766-7155-4dad-899c-f5b3de1d9a80" containerName="extract-utilities" Jan 29 15:52:54 crc kubenswrapper[4757]: I0129 15:52:54.051343 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="f122e38c-4c1c-449a-a2d1-c11e8d642caf" containerName="registry-server" Jan 29 15:52:54 crc kubenswrapper[4757]: I0129 15:52:54.051380 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="c33ac766-7155-4dad-899c-f5b3de1d9a80" containerName="registry-server" Jan 29 15:52:54 crc kubenswrapper[4757]: I0129 15:52:54.053004 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-22dg7" Jan 29 15:52:54 crc kubenswrapper[4757]: I0129 15:52:54.104810 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-22dg7"] Jan 29 15:52:54 crc kubenswrapper[4757]: I0129 15:52:54.168131 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94392983-47f9-473d-b531-2f859db5d702-catalog-content\") pod \"community-operators-22dg7\" (UID: \"94392983-47f9-473d-b531-2f859db5d702\") " pod="openshift-marketplace/community-operators-22dg7" Jan 29 15:52:54 crc kubenswrapper[4757]: I0129 15:52:54.168439 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlntd\" (UniqueName: \"kubernetes.io/projected/94392983-47f9-473d-b531-2f859db5d702-kube-api-access-vlntd\") pod \"community-operators-22dg7\" (UID: \"94392983-47f9-473d-b531-2f859db5d702\") " pod="openshift-marketplace/community-operators-22dg7" Jan 29 15:52:54 crc kubenswrapper[4757]: I0129 15:52:54.168565 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94392983-47f9-473d-b531-2f859db5d702-utilities\") pod \"community-operators-22dg7\" (UID: \"94392983-47f9-473d-b531-2f859db5d702\") " pod="openshift-marketplace/community-operators-22dg7" Jan 29 15:52:54 crc kubenswrapper[4757]: I0129 15:52:54.278325 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94392983-47f9-473d-b531-2f859db5d702-catalog-content\") pod \"community-operators-22dg7\" (UID: \"94392983-47f9-473d-b531-2f859db5d702\") " pod="openshift-marketplace/community-operators-22dg7" Jan 29 15:52:54 crc kubenswrapper[4757]: I0129 15:52:54.278619 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlntd\" (UniqueName: \"kubernetes.io/projected/94392983-47f9-473d-b531-2f859db5d702-kube-api-access-vlntd\") pod \"community-operators-22dg7\" (UID: \"94392983-47f9-473d-b531-2f859db5d702\") " 
pod="openshift-marketplace/community-operators-22dg7" Jan 29 15:52:54 crc kubenswrapper[4757]: I0129 15:52:54.278750 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94392983-47f9-473d-b531-2f859db5d702-utilities\") pod \"community-operators-22dg7\" (UID: \"94392983-47f9-473d-b531-2f859db5d702\") " pod="openshift-marketplace/community-operators-22dg7" Jan 29 15:52:54 crc kubenswrapper[4757]: I0129 15:52:54.279224 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94392983-47f9-473d-b531-2f859db5d702-utilities\") pod \"community-operators-22dg7\" (UID: \"94392983-47f9-473d-b531-2f859db5d702\") " pod="openshift-marketplace/community-operators-22dg7" Jan 29 15:52:54 crc kubenswrapper[4757]: I0129 15:52:54.279226 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94392983-47f9-473d-b531-2f859db5d702-catalog-content\") pod \"community-operators-22dg7\" (UID: \"94392983-47f9-473d-b531-2f859db5d702\") " pod="openshift-marketplace/community-operators-22dg7" Jan 29 15:52:54 crc kubenswrapper[4757]: I0129 15:52:54.300353 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlntd\" (UniqueName: \"kubernetes.io/projected/94392983-47f9-473d-b531-2f859db5d702-kube-api-access-vlntd\") pod \"community-operators-22dg7\" (UID: \"94392983-47f9-473d-b531-2f859db5d702\") " pod="openshift-marketplace/community-operators-22dg7" Jan 29 15:52:54 crc kubenswrapper[4757]: I0129 15:52:54.375229 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-22dg7" Jan 29 15:52:54 crc kubenswrapper[4757]: I0129 15:52:54.879415 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-22dg7"] Jan 29 15:52:55 crc kubenswrapper[4757]: I0129 15:52:55.469424 4757 generic.go:334] "Generic (PLEG): container finished" podID="94392983-47f9-473d-b531-2f859db5d702" containerID="b1fff1be53c9aa77663e8fcb8d342831e2ed2f1964c9d6ec65d81a9b18fd5c53" exitCode=0 Jan 29 15:52:55 crc kubenswrapper[4757]: I0129 15:52:55.469506 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22dg7" event={"ID":"94392983-47f9-473d-b531-2f859db5d702","Type":"ContainerDied","Data":"b1fff1be53c9aa77663e8fcb8d342831e2ed2f1964c9d6ec65d81a9b18fd5c53"} Jan 29 15:52:55 crc kubenswrapper[4757]: I0129 15:52:55.469752 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22dg7" event={"ID":"94392983-47f9-473d-b531-2f859db5d702","Type":"ContainerStarted","Data":"9c7e025137526b2e9083ff74460b644a31abf62daee96595bf399e23c558fb78"} Jan 29 15:52:56 crc kubenswrapper[4757]: I0129 15:52:56.485341 4757 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 15:52:59 crc kubenswrapper[4757]: I0129 15:52:59.505987 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22dg7" event={"ID":"94392983-47f9-473d-b531-2f859db5d702","Type":"ContainerStarted","Data":"b003de06b2f1d6b36c4f707c04e1084ee1aeced8a64cf3393115bcb8cbda9f61"} Jan 29 15:53:00 crc kubenswrapper[4757]: I0129 15:53:00.514202 4757 generic.go:334] "Generic (PLEG): container finished" podID="94392983-47f9-473d-b531-2f859db5d702" 
containerID="b003de06b2f1d6b36c4f707c04e1084ee1aeced8a64cf3393115bcb8cbda9f61" exitCode=0 Jan 29 15:53:00 crc kubenswrapper[4757]: I0129 15:53:00.514244 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22dg7" event={"ID":"94392983-47f9-473d-b531-2f859db5d702","Type":"ContainerDied","Data":"b003de06b2f1d6b36c4f707c04e1084ee1aeced8a64cf3393115bcb8cbda9f61"} Jan 29 15:53:01 crc kubenswrapper[4757]: I0129 15:53:01.522568 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22dg7" event={"ID":"94392983-47f9-473d-b531-2f859db5d702","Type":"ContainerStarted","Data":"8018221d328586b9181c426bc02bd1a025cad79f7836ea5b864e95134ae23d37"} Jan 29 15:53:01 crc kubenswrapper[4757]: I0129 15:53:01.555063 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-22dg7" podStartSLOduration=2.938177712 podStartE2EDuration="7.555043907s" podCreationTimestamp="2026-01-29 15:52:54 +0000 UTC" firstStartedPulling="2026-01-29 15:52:56.483698069 +0000 UTC m=+2539.772948346" lastFinishedPulling="2026-01-29 15:53:01.100564304 +0000 UTC m=+2544.389814541" observedRunningTime="2026-01-29 15:53:01.550592773 +0000 UTC m=+2544.839843010" watchObservedRunningTime="2026-01-29 15:53:01.555043907 +0000 UTC m=+2544.844294144" Jan 29 15:53:04 crc kubenswrapper[4757]: I0129 15:53:04.376055 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-22dg7" Jan 29 15:53:04 crc kubenswrapper[4757]: I0129 15:53:04.376461 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-22dg7" Jan 29 15:53:04 crc kubenswrapper[4757]: I0129 15:53:04.439706 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-22dg7" Jan 29 15:53:05 crc kubenswrapper[4757]: I0129 15:53:05.397013 4757 scope.go:117] "RemoveContainer" containerID="7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e" Jan 29 15:53:05 crc kubenswrapper[4757]: E0129 15:53:05.397214 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:53:14 crc kubenswrapper[4757]: I0129 15:53:14.415793 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-22dg7" Jan 29 15:53:14 crc kubenswrapper[4757]: I0129 15:53:14.462450 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-22dg7"] Jan 29 15:53:14 crc kubenswrapper[4757]: I0129 15:53:14.656477 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-22dg7" podUID="94392983-47f9-473d-b531-2f859db5d702" containerName="registry-server" containerID="cri-o://8018221d328586b9181c426bc02bd1a025cad79f7836ea5b864e95134ae23d37" gracePeriod=2 Jan 29 15:53:15 crc kubenswrapper[4757]: I0129 15:53:15.606614 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-22dg7" Jan 29 15:53:15 crc kubenswrapper[4757]: I0129 15:53:15.664824 4757 generic.go:334] "Generic (PLEG): container finished" podID="94392983-47f9-473d-b531-2f859db5d702" containerID="8018221d328586b9181c426bc02bd1a025cad79f7836ea5b864e95134ae23d37" exitCode=0 Jan 29 15:53:15 crc kubenswrapper[4757]: I0129 15:53:15.664880 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-22dg7" Jan 29 15:53:15 crc kubenswrapper[4757]: I0129 15:53:15.664899 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22dg7" event={"ID":"94392983-47f9-473d-b531-2f859db5d702","Type":"ContainerDied","Data":"8018221d328586b9181c426bc02bd1a025cad79f7836ea5b864e95134ae23d37"} Jan 29 15:53:15 crc kubenswrapper[4757]: I0129 15:53:15.665032 4757 scope.go:117] "RemoveContainer" containerID="8018221d328586b9181c426bc02bd1a025cad79f7836ea5b864e95134ae23d37" Jan 29 15:53:15 crc kubenswrapper[4757]: I0129 15:53:15.665130 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22dg7" event={"ID":"94392983-47f9-473d-b531-2f859db5d702","Type":"ContainerDied","Data":"9c7e025137526b2e9083ff74460b644a31abf62daee96595bf399e23c558fb78"} Jan 29 15:53:15 crc kubenswrapper[4757]: I0129 15:53:15.681534 4757 scope.go:117] "RemoveContainer" containerID="b003de06b2f1d6b36c4f707c04e1084ee1aeced8a64cf3393115bcb8cbda9f61" Jan 29 15:53:15 crc kubenswrapper[4757]: I0129 15:53:15.702937 4757 scope.go:117] "RemoveContainer" containerID="b1fff1be53c9aa77663e8fcb8d342831e2ed2f1964c9d6ec65d81a9b18fd5c53" Jan 29 15:53:15 crc kubenswrapper[4757]: I0129 15:53:15.710874 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94392983-47f9-473d-b531-2f859db5d702-utilities\") pod \"94392983-47f9-473d-b531-2f859db5d702\" (UID: \"94392983-47f9-473d-b531-2f859db5d702\") " Jan 29 15:53:15 crc kubenswrapper[4757]: I0129 15:53:15.710920 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94392983-47f9-473d-b531-2f859db5d702-catalog-content\") pod \"94392983-47f9-473d-b531-2f859db5d702\" (UID: \"94392983-47f9-473d-b531-2f859db5d702\") " Jan 29 15:53:15 crc kubenswrapper[4757]: I0129 15:53:15.710965 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlntd\" (UniqueName: \"kubernetes.io/projected/94392983-47f9-473d-b531-2f859db5d702-kube-api-access-vlntd\") pod \"94392983-47f9-473d-b531-2f859db5d702\" (UID: \"94392983-47f9-473d-b531-2f859db5d702\") " Jan 29 15:53:15 crc kubenswrapper[4757]: I0129 15:53:15.712115 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94392983-47f9-473d-b531-2f859db5d702-utilities" (OuterVolumeSpecName: "utilities") pod "94392983-47f9-473d-b531-2f859db5d702" (UID: "94392983-47f9-473d-b531-2f859db5d702"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:53:15 crc kubenswrapper[4757]: I0129 15:53:15.716771 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94392983-47f9-473d-b531-2f859db5d702-kube-api-access-vlntd" (OuterVolumeSpecName: "kube-api-access-vlntd") pod "94392983-47f9-473d-b531-2f859db5d702" (UID: "94392983-47f9-473d-b531-2f859db5d702"). InnerVolumeSpecName "kube-api-access-vlntd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:53:15 crc kubenswrapper[4757]: I0129 15:53:15.760726 4757 scope.go:117] "RemoveContainer" containerID="8018221d328586b9181c426bc02bd1a025cad79f7836ea5b864e95134ae23d37" Jan 29 15:53:15 crc kubenswrapper[4757]: E0129 15:53:15.761392 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8018221d328586b9181c426bc02bd1a025cad79f7836ea5b864e95134ae23d37\": container with ID starting with 8018221d328586b9181c426bc02bd1a025cad79f7836ea5b864e95134ae23d37 not found: ID does not exist" containerID="8018221d328586b9181c426bc02bd1a025cad79f7836ea5b864e95134ae23d37" Jan 29 15:53:15 crc kubenswrapper[4757]: I0129 15:53:15.761446 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8018221d328586b9181c426bc02bd1a025cad79f7836ea5b864e95134ae23d37"} err="failed to get container status \"8018221d328586b9181c426bc02bd1a025cad79f7836ea5b864e95134ae23d37\": rpc error: code = NotFound desc = could not find container \"8018221d328586b9181c426bc02bd1a025cad79f7836ea5b864e95134ae23d37\": container with ID starting with 8018221d328586b9181c426bc02bd1a025cad79f7836ea5b864e95134ae23d37 not found: ID does not exist" Jan 29 15:53:15 crc kubenswrapper[4757]: I0129 15:53:15.761474 4757 scope.go:117] "RemoveContainer" containerID="b003de06b2f1d6b36c4f707c04e1084ee1aeced8a64cf3393115bcb8cbda9f61" Jan 29 15:53:15 crc kubenswrapper[4757]: E0129 15:53:15.762155 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b003de06b2f1d6b36c4f707c04e1084ee1aeced8a64cf3393115bcb8cbda9f61\": container with ID starting with b003de06b2f1d6b36c4f707c04e1084ee1aeced8a64cf3393115bcb8cbda9f61 not found: ID does not exist" containerID="b003de06b2f1d6b36c4f707c04e1084ee1aeced8a64cf3393115bcb8cbda9f61" Jan 29 15:53:15 crc kubenswrapper[4757]: I0129 15:53:15.762197 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b003de06b2f1d6b36c4f707c04e1084ee1aeced8a64cf3393115bcb8cbda9f61"} err="failed to get container status \"b003de06b2f1d6b36c4f707c04e1084ee1aeced8a64cf3393115bcb8cbda9f61\": rpc error: code = NotFound desc = could not find container \"b003de06b2f1d6b36c4f707c04e1084ee1aeced8a64cf3393115bcb8cbda9f61\": container with ID starting with b003de06b2f1d6b36c4f707c04e1084ee1aeced8a64cf3393115bcb8cbda9f61 not found: ID does not exist" Jan 29 15:53:15 crc kubenswrapper[4757]: I0129 15:53:15.762226 4757 scope.go:117] "RemoveContainer" containerID="b1fff1be53c9aa77663e8fcb8d342831e2ed2f1964c9d6ec65d81a9b18fd5c53" Jan 29 15:53:15 crc kubenswrapper[4757]: E0129 15:53:15.762692 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1fff1be53c9aa77663e8fcb8d342831e2ed2f1964c9d6ec65d81a9b18fd5c53\": container with ID starting with b1fff1be53c9aa77663e8fcb8d342831e2ed2f1964c9d6ec65d81a9b18fd5c53 not found: ID does not 
exist" containerID="b1fff1be53c9aa77663e8fcb8d342831e2ed2f1964c9d6ec65d81a9b18fd5c53" Jan 29 15:53:15 crc kubenswrapper[4757]: I0129 15:53:15.762729 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1fff1be53c9aa77663e8fcb8d342831e2ed2f1964c9d6ec65d81a9b18fd5c53"} err="failed to get container status \"b1fff1be53c9aa77663e8fcb8d342831e2ed2f1964c9d6ec65d81a9b18fd5c53\": rpc error: code = NotFound desc = could not find container \"b1fff1be53c9aa77663e8fcb8d342831e2ed2f1964c9d6ec65d81a9b18fd5c53\": container with ID starting with b1fff1be53c9aa77663e8fcb8d342831e2ed2f1964c9d6ec65d81a9b18fd5c53 not found: ID does not exist" Jan 29 15:53:15 crc kubenswrapper[4757]: I0129 15:53:15.767382 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94392983-47f9-473d-b531-2f859db5d702-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94392983-47f9-473d-b531-2f859db5d702" (UID: "94392983-47f9-473d-b531-2f859db5d702"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:53:15 crc kubenswrapper[4757]: I0129 15:53:15.812565 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlntd\" (UniqueName: \"kubernetes.io/projected/94392983-47f9-473d-b531-2f859db5d702-kube-api-access-vlntd\") on node \"crc\" DevicePath \"\"" Jan 29 15:53:15 crc kubenswrapper[4757]: I0129 15:53:15.812856 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94392983-47f9-473d-b531-2f859db5d702-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:53:15 crc kubenswrapper[4757]: I0129 15:53:15.812941 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94392983-47f9-473d-b531-2f859db5d702-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:53:16 crc kubenswrapper[4757]: I0129 15:53:16.006420 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-22dg7"] Jan 29 15:53:16 crc kubenswrapper[4757]: I0129 15:53:16.015644 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-22dg7"] Jan 29 15:53:16 crc kubenswrapper[4757]: E0129 15:53:16.110890 4757 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94392983_47f9_473d_b531_2f859db5d702.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94392983_47f9_473d_b531_2f859db5d702.slice/crio-9c7e025137526b2e9083ff74460b644a31abf62daee96595bf399e23c558fb78\": RecentStats: unable to find data in memory cache]" Jan 29 15:53:17 crc kubenswrapper[4757]: I0129 15:53:17.407469 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94392983-47f9-473d-b531-2f859db5d702" path="/var/lib/kubelet/pods/94392983-47f9-473d-b531-2f859db5d702/volumes" Jan 29 15:53:18 crc kubenswrapper[4757]: I0129 15:53:18.396749 4757 scope.go:117] "RemoveContainer" containerID="7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e" Jan 29 15:53:18 crc kubenswrapper[4757]: E0129 15:53:18.397042 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:53:31 crc kubenswrapper[4757]: I0129 15:53:31.397162 4757 scope.go:117] "RemoveContainer" containerID="7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e" Jan 29 15:53:31 crc kubenswrapper[4757]: E0129 15:53:31.397988 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:53:45 crc kubenswrapper[4757]: I0129 15:53:45.396914 4757 scope.go:117] "RemoveContainer" containerID="7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e" Jan 29 15:53:45 crc kubenswrapper[4757]: E0129 15:53:45.397892 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 15:53:59 crc kubenswrapper[4757]: I0129 15:53:59.396309 4757 scope.go:117] "RemoveContainer" containerID="7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e" Jan 29 15:54:00 crc kubenswrapper[4757]: I0129 15:54:00.019771 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerStarted","Data":"c6ddb6bfcb333cdc70bf5bce658712775e15e0c34232d155e9954087d4ada49d"} Jan 29 15:56:17 crc kubenswrapper[4757]: I0129 15:56:17.604511 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:56:17 crc kubenswrapper[4757]: I0129 15:56:17.605028 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:56:47 crc kubenswrapper[4757]: I0129 15:56:47.605396 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:56:47 crc kubenswrapper[4757]: I0129 15:56:47.606024 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
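connect: connection refused"

Three liveness probes against http://127.0.0.1:8798/health, spaced 30 seconds apart (15:56:17, 15:56:47, and 15:57:17 below), fail with connection refused before the kubelet marks the container unhealthy and kills it with gracePeriod=600. That timing is consistent with a probe period of 30s and the default failure threshold of 3, though that is an inference from the cadence, not something the log states. A short stdlib-only sketch that tallies prober.go failures per pod and probe type from a log in this format (the file name is an assumption):

```python
import re
from collections import Counter

# Matches the prober.go "Probe failed" entries shown in this log.
PROBE = re.compile(
    r'prober\.go:\d+\] "Probe failed" probeType="(?P<type>\w+)" '
    r'pod="(?P<pod>[^"]+)"'
)

def probe_failures(log_text):
    """Count probe failures per (pod, probe type)."""
    return Counter((m["pod"], m["type"]) for m in PROBE.finditer(log_text))

if __name__ == "__main__":
    with open("kubelet.log") as f:  # path is an assumption
        for (pod, kind), n in probe_failures(f.read()).most_common():
            print(f"{n:3d} {kind:9s} failures  {pod}")
```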
Jan 29 15:57:17 crc kubenswrapper[4757]: I0129 15:57:17.605329 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 15:57:17 crc kubenswrapper[4757]: I0129 15:57:17.605965 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 15:57:17 crc kubenswrapper[4757]: I0129 15:57:17.606019 4757 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-45q8t"
Jan 29 15:57:17 crc kubenswrapper[4757]: I0129 15:57:17.606693 4757 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c6ddb6bfcb333cdc70bf5bce658712775e15e0c34232d155e9954087d4ada49d"} pod="openshift-machine-config-operator/machine-config-daemon-45q8t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 15:57:17 crc kubenswrapper[4757]: I0129 15:57:17.606768 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" containerID="cri-o://c6ddb6bfcb333cdc70bf5bce658712775e15e0c34232d155e9954087d4ada49d" gracePeriod=600
Jan 29 15:57:18 crc kubenswrapper[4757]: I0129 15:57:18.501392 4757 generic.go:334] "Generic (PLEG): container finished" podID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerID="c6ddb6bfcb333cdc70bf5bce658712775e15e0c34232d155e9954087d4ada49d" exitCode=0
Jan 29 15:57:18 crc kubenswrapper[4757]: I0129 15:57:18.501473 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerDied","Data":"c6ddb6bfcb333cdc70bf5bce658712775e15e0c34232d155e9954087d4ada49d"}
Jan 29 15:57:18 crc kubenswrapper[4757]: I0129 15:57:18.501871 4757 scope.go:117] "RemoveContainer" containerID="7cc1bab0f3746dca2aa416acdb9e357e54b2acea8b3d65cc3c104afca6e7794e"
Jan 29 15:57:19 crc kubenswrapper[4757]: I0129 15:57:19.517320 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerStarted","Data":"341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d"}
Jan 29 15:59:36 crc kubenswrapper[4757]: I0129 15:59:36.534493 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2zwsd"]
Jan 29 15:59:36 crc kubenswrapper[4757]: E0129 15:59:36.535199 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94392983-47f9-473d-b531-2f859db5d702" containerName="extract-utilities"
Jan 29 15:59:36 crc kubenswrapper[4757]: I0129 15:59:36.535211 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="94392983-47f9-473d-b531-2f859db5d702" containerName="extract-utilities"
Jan 29 15:59:36 crc kubenswrapper[4757]: E0129 15:59:36.535223
4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94392983-47f9-473d-b531-2f859db5d702" containerName="registry-server" Jan 29 15:59:36 crc kubenswrapper[4757]: I0129 15:59:36.535230 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="94392983-47f9-473d-b531-2f859db5d702" containerName="registry-server" Jan 29 15:59:36 crc kubenswrapper[4757]: E0129 15:59:36.535252 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94392983-47f9-473d-b531-2f859db5d702" containerName="extract-content" Jan 29 15:59:36 crc kubenswrapper[4757]: I0129 15:59:36.535258 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="94392983-47f9-473d-b531-2f859db5d702" containerName="extract-content" Jan 29 15:59:36 crc kubenswrapper[4757]: I0129 15:59:36.535400 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="94392983-47f9-473d-b531-2f859db5d702" containerName="registry-server" Jan 29 15:59:36 crc kubenswrapper[4757]: I0129 15:59:36.536331 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2zwsd" Jan 29 15:59:36 crc kubenswrapper[4757]: I0129 15:59:36.541396 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zwsd"] Jan 29 15:59:36 crc kubenswrapper[4757]: I0129 15:59:36.704398 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpmwk\" (UniqueName: \"kubernetes.io/projected/3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0-kube-api-access-tpmwk\") pod \"redhat-marketplace-2zwsd\" (UID: \"3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0\") " pod="openshift-marketplace/redhat-marketplace-2zwsd" Jan 29 15:59:36 crc kubenswrapper[4757]: I0129 15:59:36.704819 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0-utilities\") pod \"redhat-marketplace-2zwsd\" (UID: \"3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0\") " pod="openshift-marketplace/redhat-marketplace-2zwsd" Jan 29 15:59:36 crc kubenswrapper[4757]: I0129 15:59:36.704996 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0-catalog-content\") pod \"redhat-marketplace-2zwsd\" (UID: \"3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0\") " pod="openshift-marketplace/redhat-marketplace-2zwsd" Jan 29 15:59:36 crc kubenswrapper[4757]: I0129 15:59:36.806205 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0-catalog-content\") pod \"redhat-marketplace-2zwsd\" (UID: \"3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0\") " pod="openshift-marketplace/redhat-marketplace-2zwsd" Jan 29 15:59:36 crc kubenswrapper[4757]: I0129 15:59:36.806303 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpmwk\" (UniqueName: \"kubernetes.io/projected/3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0-kube-api-access-tpmwk\") pod \"redhat-marketplace-2zwsd\" (UID: \"3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0\") " pod="openshift-marketplace/redhat-marketplace-2zwsd" Jan 29 15:59:36 crc kubenswrapper[4757]: I0129 15:59:36.806430 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0-utilities\") pod \"redhat-marketplace-2zwsd\" (UID: \"3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0\") " pod="openshift-marketplace/redhat-marketplace-2zwsd" Jan 29 15:59:36 crc kubenswrapper[4757]: I0129 15:59:36.807031 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0-utilities\") pod \"redhat-marketplace-2zwsd\" (UID: \"3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0\") " pod="openshift-marketplace/redhat-marketplace-2zwsd" Jan 29 15:59:36 crc kubenswrapper[4757]: I0129 15:59:36.807076 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0-catalog-content\") pod \"redhat-marketplace-2zwsd\" (UID: \"3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0\") " pod="openshift-marketplace/redhat-marketplace-2zwsd" Jan 29 15:59:36 crc kubenswrapper[4757]: I0129 15:59:36.825739 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpmwk\" (UniqueName: \"kubernetes.io/projected/3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0-kube-api-access-tpmwk\") pod \"redhat-marketplace-2zwsd\" (UID: \"3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0\") " pod="openshift-marketplace/redhat-marketplace-2zwsd" Jan 29 15:59:36 crc kubenswrapper[4757]: I0129 15:59:36.854725 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2zwsd" Jan 29 15:59:37 crc kubenswrapper[4757]: I0129 15:59:37.088139 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zwsd"] Jan 29 15:59:37 crc kubenswrapper[4757]: I0129 15:59:37.536994 4757 generic.go:334] "Generic (PLEG): container finished" podID="3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0" containerID="18b3dd3d24005451a7687855e077ff6bed19a6f7e3db00762ebbcd0d6a675803" exitCode=0 Jan 29 15:59:37 crc kubenswrapper[4757]: I0129 15:59:37.537086 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zwsd" event={"ID":"3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0","Type":"ContainerDied","Data":"18b3dd3d24005451a7687855e077ff6bed19a6f7e3db00762ebbcd0d6a675803"} Jan 29 15:59:37 crc kubenswrapper[4757]: I0129 15:59:37.537314 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zwsd" event={"ID":"3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0","Type":"ContainerStarted","Data":"7d0c244e740d08cadb9e4bfc6754851d13fc66a45e6cd7eb1cabfb2c784391b9"} Jan 29 15:59:37 crc kubenswrapper[4757]: I0129 15:59:37.539042 4757 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 15:59:37 crc kubenswrapper[4757]: E0129 15:59:37.667294 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:59:37 crc kubenswrapper[4757]: E0129 15:59:37.667470 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tpmwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-2zwsd_openshift-marketplace(3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:59:37 crc kubenswrapper[4757]: E0129 15:59:37.668692 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-2zwsd" podUID="3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0" Jan 29 15:59:38 crc kubenswrapper[4757]: E0129 15:59:38.546991 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2zwsd" podUID="3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0" Jan 29 15:59:47 crc kubenswrapper[4757]: I0129 15:59:47.605737 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:59:47 crc kubenswrapper[4757]: I0129 15:59:47.606440 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:59:49 crc kubenswrapper[4757]: E0129 15:59:49.531868 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" 
image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:59:49 crc kubenswrapper[4757]: E0129 15:59:49.532108 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tpmwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-2zwsd_openshift-marketplace(3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:59:49 crc kubenswrapper[4757]: E0129 15:59:49.533238 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-2zwsd" podUID="3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0" Jan 29 16:00:00 crc kubenswrapper[4757]: I0129 16:00:00.200119 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495040-26xlh"] Jan 29 16:00:00 crc kubenswrapper[4757]: I0129 16:00:00.202459 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-26xlh" Jan 29 16:00:00 crc kubenswrapper[4757]: I0129 16:00:00.205672 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 16:00:00 crc kubenswrapper[4757]: I0129 16:00:00.206380 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 16:00:00 crc kubenswrapper[4757]: I0129 16:00:00.212828 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495040-26xlh"] Jan 29 16:00:00 crc kubenswrapper[4757]: I0129 16:00:00.361090 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgmwr\" (UniqueName: \"kubernetes.io/projected/c7ef0bf6-078d-42ac-a9b2-97a44167a015-kube-api-access-mgmwr\") pod \"collect-profiles-29495040-26xlh\" (UID: \"c7ef0bf6-078d-42ac-a9b2-97a44167a015\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-26xlh" Jan 29 16:00:00 crc kubenswrapper[4757]: I0129 16:00:00.361444 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7ef0bf6-078d-42ac-a9b2-97a44167a015-secret-volume\") pod \"collect-profiles-29495040-26xlh\" (UID: \"c7ef0bf6-078d-42ac-a9b2-97a44167a015\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-26xlh" Jan 29 16:00:00 crc kubenswrapper[4757]: I0129 16:00:00.361514 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7ef0bf6-078d-42ac-a9b2-97a44167a015-config-volume\") pod \"collect-profiles-29495040-26xlh\" (UID: \"c7ef0bf6-078d-42ac-a9b2-97a44167a015\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-26xlh" Jan 29 16:00:00 crc kubenswrapper[4757]: I0129 16:00:00.463551 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7ef0bf6-078d-42ac-a9b2-97a44167a015-secret-volume\") pod \"collect-profiles-29495040-26xlh\" (UID: \"c7ef0bf6-078d-42ac-a9b2-97a44167a015\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-26xlh" Jan 29 16:00:00 crc kubenswrapper[4757]: I0129 16:00:00.463621 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7ef0bf6-078d-42ac-a9b2-97a44167a015-config-volume\") pod \"collect-profiles-29495040-26xlh\" (UID: \"c7ef0bf6-078d-42ac-a9b2-97a44167a015\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-26xlh" Jan 29 16:00:00 crc kubenswrapper[4757]: I0129 16:00:00.463656 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgmwr\" (UniqueName: \"kubernetes.io/projected/c7ef0bf6-078d-42ac-a9b2-97a44167a015-kube-api-access-mgmwr\") pod \"collect-profiles-29495040-26xlh\" (UID: \"c7ef0bf6-078d-42ac-a9b2-97a44167a015\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-26xlh" Jan 29 16:00:00 crc kubenswrapper[4757]: I0129 16:00:00.464683 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7ef0bf6-078d-42ac-a9b2-97a44167a015-config-volume\") pod 
\"collect-profiles-29495040-26xlh\" (UID: \"c7ef0bf6-078d-42ac-a9b2-97a44167a015\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-26xlh" Jan 29 16:00:00 crc kubenswrapper[4757]: I0129 16:00:00.469762 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7ef0bf6-078d-42ac-a9b2-97a44167a015-secret-volume\") pod \"collect-profiles-29495040-26xlh\" (UID: \"c7ef0bf6-078d-42ac-a9b2-97a44167a015\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-26xlh" Jan 29 16:00:00 crc kubenswrapper[4757]: I0129 16:00:00.482402 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgmwr\" (UniqueName: \"kubernetes.io/projected/c7ef0bf6-078d-42ac-a9b2-97a44167a015-kube-api-access-mgmwr\") pod \"collect-profiles-29495040-26xlh\" (UID: \"c7ef0bf6-078d-42ac-a9b2-97a44167a015\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-26xlh" Jan 29 16:00:00 crc kubenswrapper[4757]: I0129 16:00:00.537975 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-26xlh" Jan 29 16:00:00 crc kubenswrapper[4757]: W0129 16:00:00.970415 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7ef0bf6_078d_42ac_a9b2_97a44167a015.slice/crio-d022f01a5e0e1d4182daf5945b6c69cc496902eceb46acb87e297efe0264100f WatchSource:0}: Error finding container d022f01a5e0e1d4182daf5945b6c69cc496902eceb46acb87e297efe0264100f: Status 404 returned error can't find the container with id d022f01a5e0e1d4182daf5945b6c69cc496902eceb46acb87e297efe0264100f Jan 29 16:00:00 crc kubenswrapper[4757]: I0129 16:00:00.971718 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495040-26xlh"] Jan 29 16:00:01 crc kubenswrapper[4757]: I0129 16:00:01.730565 4757 generic.go:334] "Generic (PLEG): container finished" podID="c7ef0bf6-078d-42ac-a9b2-97a44167a015" containerID="f8f65ffb557b2aca9592e39667a193fcf82db8780d116ad67f9a3378240dbafe" exitCode=0 Jan 29 16:00:01 crc kubenswrapper[4757]: I0129 16:00:01.730619 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-26xlh" event={"ID":"c7ef0bf6-078d-42ac-a9b2-97a44167a015","Type":"ContainerDied","Data":"f8f65ffb557b2aca9592e39667a193fcf82db8780d116ad67f9a3378240dbafe"} Jan 29 16:00:01 crc kubenswrapper[4757]: I0129 16:00:01.730906 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-26xlh" event={"ID":"c7ef0bf6-078d-42ac-a9b2-97a44167a015","Type":"ContainerStarted","Data":"d022f01a5e0e1d4182daf5945b6c69cc496902eceb46acb87e297efe0264100f"} Jan 29 16:00:02 crc kubenswrapper[4757]: E0129 16:00:02.398072 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2zwsd" podUID="3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0" Jan 29 16:00:02 crc kubenswrapper[4757]: I0129 16:00:02.974954 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-26xlh" Jan 29 16:00:03 crc kubenswrapper[4757]: I0129 16:00:03.111683 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7ef0bf6-078d-42ac-a9b2-97a44167a015-secret-volume\") pod \"c7ef0bf6-078d-42ac-a9b2-97a44167a015\" (UID: \"c7ef0bf6-078d-42ac-a9b2-97a44167a015\") " Jan 29 16:00:03 crc kubenswrapper[4757]: I0129 16:00:03.112065 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgmwr\" (UniqueName: \"kubernetes.io/projected/c7ef0bf6-078d-42ac-a9b2-97a44167a015-kube-api-access-mgmwr\") pod \"c7ef0bf6-078d-42ac-a9b2-97a44167a015\" (UID: \"c7ef0bf6-078d-42ac-a9b2-97a44167a015\") " Jan 29 16:00:03 crc kubenswrapper[4757]: I0129 16:00:03.112107 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7ef0bf6-078d-42ac-a9b2-97a44167a015-config-volume\") pod \"c7ef0bf6-078d-42ac-a9b2-97a44167a015\" (UID: \"c7ef0bf6-078d-42ac-a9b2-97a44167a015\") " Jan 29 16:00:03 crc kubenswrapper[4757]: I0129 16:00:03.112905 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7ef0bf6-078d-42ac-a9b2-97a44167a015-config-volume" (OuterVolumeSpecName: "config-volume") pod "c7ef0bf6-078d-42ac-a9b2-97a44167a015" (UID: "c7ef0bf6-078d-42ac-a9b2-97a44167a015"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:00:03 crc kubenswrapper[4757]: I0129 16:00:03.118115 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7ef0bf6-078d-42ac-a9b2-97a44167a015-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c7ef0bf6-078d-42ac-a9b2-97a44167a015" (UID: "c7ef0bf6-078d-42ac-a9b2-97a44167a015"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:00:03 crc kubenswrapper[4757]: I0129 16:00:03.118424 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7ef0bf6-078d-42ac-a9b2-97a44167a015-kube-api-access-mgmwr" (OuterVolumeSpecName: "kube-api-access-mgmwr") pod "c7ef0bf6-078d-42ac-a9b2-97a44167a015" (UID: "c7ef0bf6-078d-42ac-a9b2-97a44167a015"). InnerVolumeSpecName "kube-api-access-mgmwr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:00:03 crc kubenswrapper[4757]: I0129 16:00:03.213388 4757 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7ef0bf6-078d-42ac-a9b2-97a44167a015-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:03 crc kubenswrapper[4757]: I0129 16:00:03.213422 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgmwr\" (UniqueName: \"kubernetes.io/projected/c7ef0bf6-078d-42ac-a9b2-97a44167a015-kube-api-access-mgmwr\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:03 crc kubenswrapper[4757]: I0129 16:00:03.213433 4757 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7ef0bf6-078d-42ac-a9b2-97a44167a015-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:03 crc kubenswrapper[4757]: I0129 16:00:03.752631 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-26xlh" event={"ID":"c7ef0bf6-078d-42ac-a9b2-97a44167a015","Type":"ContainerDied","Data":"d022f01a5e0e1d4182daf5945b6c69cc496902eceb46acb87e297efe0264100f"} Jan 29 16:00:03 crc kubenswrapper[4757]: I0129 16:00:03.752670 4757 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d022f01a5e0e1d4182daf5945b6c69cc496902eceb46acb87e297efe0264100f" Jan 29 16:00:03 crc kubenswrapper[4757]: I0129 16:00:03.752719 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-26xlh" Jan 29 16:00:04 crc kubenswrapper[4757]: I0129 16:00:04.049234 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494995-ncxzx"] Jan 29 16:00:04 crc kubenswrapper[4757]: I0129 16:00:04.054421 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494995-ncxzx"] Jan 29 16:00:05 crc kubenswrapper[4757]: I0129 16:00:05.406748 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18027a76-8991-403e-8dec-d0115c4cb164" path="/var/lib/kubelet/pods/18027a76-8991-403e-8dec-d0115c4cb164/volumes" Jan 29 16:00:16 crc kubenswrapper[4757]: E0129 16:00:16.525441 4757 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:00:16 crc kubenswrapper[4757]: E0129 16:00:16.525983 4757 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tpmwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-2zwsd_openshift-marketplace(3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:00:16 crc kubenswrapper[4757]: E0129 16:00:16.527159 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-2zwsd" podUID="3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0" Jan 29 16:00:17 crc kubenswrapper[4757]: I0129 16:00:17.605043 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:00:17 crc kubenswrapper[4757]: I0129 16:00:17.605139 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:00:21 crc kubenswrapper[4757]: I0129 16:00:21.877073 4757 scope.go:117] "RemoveContainer" containerID="aa2d045fd021df6521000bca3bf6784d55ac4d235404f9bd5f47a93e9ec0b0f4" Jan 29 16:00:28 crc kubenswrapper[4757]: E0129 16:00:28.398299 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2zwsd" podUID="3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0" Jan 29 16:00:42 crc kubenswrapper[4757]: E0129 16:00:42.398375 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2zwsd" podUID="3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0" Jan 29 16:00:47 crc kubenswrapper[4757]: I0129 16:00:47.604343 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:00:47 crc kubenswrapper[4757]: I0129 16:00:47.605549 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:00:47 crc kubenswrapper[4757]: I0129 16:00:47.605612 4757 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" Jan 29 16:00:47 crc kubenswrapper[4757]: I0129 16:00:47.606252 4757 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d"} pod="openshift-machine-config-operator/machine-config-daemon-45q8t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:00:47 crc kubenswrapper[4757]: I0129 16:00:47.606333 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" containerID="cri-o://341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d" gracePeriod=600 Jan 29 16:00:47 crc kubenswrapper[4757]: E0129 16:00:47.732250 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 16:00:48 crc kubenswrapper[4757]: I0129 16:00:48.053452 4757 generic.go:334] "Generic (PLEG): container finished" podID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerID="341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d" exitCode=0 Jan 29 16:00:48 crc kubenswrapper[4757]: I0129 16:00:48.053504 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerDied","Data":"341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d"} Jan 29 16:00:48 crc kubenswrapper[4757]: I0129 16:00:48.053543 4757 scope.go:117] "RemoveContainer" containerID="c6ddb6bfcb333cdc70bf5bce658712775e15e0c34232d155e9954087d4ada49d" Jan 29 16:00:48 crc kubenswrapper[4757]: I0129 16:00:48.054108 4757 scope.go:117] "RemoveContainer" containerID="341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d" Jan 29 16:00:48 crc kubenswrapper[4757]: E0129 16:00:48.054376 4757 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 16:00:54 crc kubenswrapper[4757]: E0129 16:00:54.398578 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2zwsd" podUID="3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0" Jan 29 16:00:58 crc kubenswrapper[4757]: I0129 16:00:58.156077 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7l4x8"] Jan 29 16:00:58 crc kubenswrapper[4757]: E0129 16:00:58.156971 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7ef0bf6-078d-42ac-a9b2-97a44167a015" containerName="collect-profiles" Jan 29 16:00:58 crc kubenswrapper[4757]: I0129 16:00:58.156988 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7ef0bf6-078d-42ac-a9b2-97a44167a015" containerName="collect-profiles" Jan 29 16:00:58 crc kubenswrapper[4757]: I0129 16:00:58.157150 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7ef0bf6-078d-42ac-a9b2-97a44167a015" containerName="collect-profiles" Jan 29 16:00:58 crc kubenswrapper[4757]: I0129 16:00:58.158157 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7l4x8" Jan 29 16:00:58 crc kubenswrapper[4757]: I0129 16:00:58.176959 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7l4x8"] Jan 29 16:00:58 crc kubenswrapper[4757]: I0129 16:00:58.272331 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7bc7\" (UniqueName: \"kubernetes.io/projected/7fdc5e7a-23a0-40c6-b5eb-0655bf54320a-kube-api-access-x7bc7\") pod \"certified-operators-7l4x8\" (UID: \"7fdc5e7a-23a0-40c6-b5eb-0655bf54320a\") " pod="openshift-marketplace/certified-operators-7l4x8" Jan 29 16:00:58 crc kubenswrapper[4757]: I0129 16:00:58.272591 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fdc5e7a-23a0-40c6-b5eb-0655bf54320a-catalog-content\") pod \"certified-operators-7l4x8\" (UID: \"7fdc5e7a-23a0-40c6-b5eb-0655bf54320a\") " pod="openshift-marketplace/certified-operators-7l4x8" Jan 29 16:00:58 crc kubenswrapper[4757]: I0129 16:00:58.272696 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fdc5e7a-23a0-40c6-b5eb-0655bf54320a-utilities\") pod \"certified-operators-7l4x8\" (UID: \"7fdc5e7a-23a0-40c6-b5eb-0655bf54320a\") " pod="openshift-marketplace/certified-operators-7l4x8" Jan 29 16:00:58 crc kubenswrapper[4757]: I0129 16:00:58.374496 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7bc7\" (UniqueName: \"kubernetes.io/projected/7fdc5e7a-23a0-40c6-b5eb-0655bf54320a-kube-api-access-x7bc7\") pod \"certified-operators-7l4x8\" (UID: \"7fdc5e7a-23a0-40c6-b5eb-0655bf54320a\") " 
pod="openshift-marketplace/certified-operators-7l4x8" Jan 29 16:00:58 crc kubenswrapper[4757]: I0129 16:00:58.374795 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fdc5e7a-23a0-40c6-b5eb-0655bf54320a-catalog-content\") pod \"certified-operators-7l4x8\" (UID: \"7fdc5e7a-23a0-40c6-b5eb-0655bf54320a\") " pod="openshift-marketplace/certified-operators-7l4x8" Jan 29 16:00:58 crc kubenswrapper[4757]: I0129 16:00:58.374893 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fdc5e7a-23a0-40c6-b5eb-0655bf54320a-utilities\") pod \"certified-operators-7l4x8\" (UID: \"7fdc5e7a-23a0-40c6-b5eb-0655bf54320a\") " pod="openshift-marketplace/certified-operators-7l4x8" Jan 29 16:00:58 crc kubenswrapper[4757]: I0129 16:00:58.375375 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fdc5e7a-23a0-40c6-b5eb-0655bf54320a-catalog-content\") pod \"certified-operators-7l4x8\" (UID: \"7fdc5e7a-23a0-40c6-b5eb-0655bf54320a\") " pod="openshift-marketplace/certified-operators-7l4x8" Jan 29 16:00:58 crc kubenswrapper[4757]: I0129 16:00:58.375395 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fdc5e7a-23a0-40c6-b5eb-0655bf54320a-utilities\") pod \"certified-operators-7l4x8\" (UID: \"7fdc5e7a-23a0-40c6-b5eb-0655bf54320a\") " pod="openshift-marketplace/certified-operators-7l4x8" Jan 29 16:00:58 crc kubenswrapper[4757]: I0129 16:00:58.404199 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7bc7\" (UniqueName: \"kubernetes.io/projected/7fdc5e7a-23a0-40c6-b5eb-0655bf54320a-kube-api-access-x7bc7\") pod \"certified-operators-7l4x8\" (UID: \"7fdc5e7a-23a0-40c6-b5eb-0655bf54320a\") " pod="openshift-marketplace/certified-operators-7l4x8" Jan 29 16:00:58 crc kubenswrapper[4757]: I0129 16:00:58.477527 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7l4x8" Jan 29 16:00:58 crc kubenswrapper[4757]: I0129 16:00:58.954139 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7l4x8"] Jan 29 16:00:59 crc kubenswrapper[4757]: I0129 16:00:59.121629 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7l4x8" event={"ID":"7fdc5e7a-23a0-40c6-b5eb-0655bf54320a","Type":"ContainerStarted","Data":"5728671884c2d0868b3538beba1e6c9f40ecf05275e5feb20f83a6c8f0ded77f"} Jan 29 16:01:00 crc kubenswrapper[4757]: I0129 16:01:00.130447 4757 generic.go:334] "Generic (PLEG): container finished" podID="7fdc5e7a-23a0-40c6-b5eb-0655bf54320a" containerID="92b16aa715f8959e9d75a981133a7d1a3576ce9d5b990a9744e4bd1cc4f0a492" exitCode=0 Jan 29 16:01:00 crc kubenswrapper[4757]: I0129 16:01:00.130526 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7l4x8" event={"ID":"7fdc5e7a-23a0-40c6-b5eb-0655bf54320a","Type":"ContainerDied","Data":"92b16aa715f8959e9d75a981133a7d1a3576ce9d5b990a9744e4bd1cc4f0a492"} Jan 29 16:01:01 crc kubenswrapper[4757]: I0129 16:01:01.138168 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7l4x8" event={"ID":"7fdc5e7a-23a0-40c6-b5eb-0655bf54320a","Type":"ContainerStarted","Data":"0af7645db62c75bff9a9e49f625e9dc17841b3b5c82c69f8db35518f1f5adbc7"} Jan 29 16:01:01 crc kubenswrapper[4757]: I0129 16:01:01.396053 4757 scope.go:117] "RemoveContainer" containerID="341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d" Jan 29 16:01:01 crc kubenswrapper[4757]: E0129 16:01:01.396304 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 16:01:02 crc kubenswrapper[4757]: I0129 16:01:02.146953 4757 generic.go:334] "Generic (PLEG): container finished" podID="7fdc5e7a-23a0-40c6-b5eb-0655bf54320a" containerID="0af7645db62c75bff9a9e49f625e9dc17841b3b5c82c69f8db35518f1f5adbc7" exitCode=0 Jan 29 16:01:02 crc kubenswrapper[4757]: I0129 16:01:02.147007 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7l4x8" event={"ID":"7fdc5e7a-23a0-40c6-b5eb-0655bf54320a","Type":"ContainerDied","Data":"0af7645db62c75bff9a9e49f625e9dc17841b3b5c82c69f8db35518f1f5adbc7"} Jan 29 16:01:03 crc kubenswrapper[4757]: I0129 16:01:03.155370 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7l4x8" event={"ID":"7fdc5e7a-23a0-40c6-b5eb-0655bf54320a","Type":"ContainerStarted","Data":"ac4af8839c39b14151340df99a388780529a213813225a2e7eb52b54642ea158"} Jan 29 16:01:03 crc kubenswrapper[4757]: I0129 16:01:03.182028 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7l4x8" podStartSLOduration=2.738573049 podStartE2EDuration="5.182013257s" podCreationTimestamp="2026-01-29 16:00:58 +0000 UTC" firstStartedPulling="2026-01-29 16:01:00.132642755 +0000 UTC m=+3023.421893002" lastFinishedPulling="2026-01-29 16:01:02.576082973 +0000 UTC m=+3025.865333210" observedRunningTime="2026-01-29 
16:01:03.181987406 +0000 UTC m=+3026.471237643" watchObservedRunningTime="2026-01-29 16:01:03.182013257 +0000 UTC m=+3026.471263494" Jan 29 16:01:08 crc kubenswrapper[4757]: I0129 16:01:08.477739 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7l4x8" Jan 29 16:01:08 crc kubenswrapper[4757]: I0129 16:01:08.478343 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7l4x8" Jan 29 16:01:08 crc kubenswrapper[4757]: I0129 16:01:08.523112 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7l4x8" Jan 29 16:01:09 crc kubenswrapper[4757]: I0129 16:01:09.230554 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zwsd" event={"ID":"3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0","Type":"ContainerStarted","Data":"8ee243d8123e772a8246c2e5614707e98f61a6336732035673542524225aed3d"} Jan 29 16:01:09 crc kubenswrapper[4757]: I0129 16:01:09.388651 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7l4x8" Jan 29 16:01:09 crc kubenswrapper[4757]: I0129 16:01:09.759002 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7l4x8"] Jan 29 16:01:10 crc kubenswrapper[4757]: I0129 16:01:10.237227 4757 generic.go:334] "Generic (PLEG): container finished" podID="3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0" containerID="8ee243d8123e772a8246c2e5614707e98f61a6336732035673542524225aed3d" exitCode=0 Jan 29 16:01:10 crc kubenswrapper[4757]: I0129 16:01:10.237295 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zwsd" event={"ID":"3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0","Type":"ContainerDied","Data":"8ee243d8123e772a8246c2e5614707e98f61a6336732035673542524225aed3d"} Jan 29 16:01:11 crc kubenswrapper[4757]: I0129 16:01:11.246646 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zwsd" event={"ID":"3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0","Type":"ContainerStarted","Data":"def08b26ec0573a10640430bfce45a583e9e06f8331a9da856435d0c9e6f7740"} Jan 29 16:01:11 crc kubenswrapper[4757]: I0129 16:01:11.246836 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7l4x8" podUID="7fdc5e7a-23a0-40c6-b5eb-0655bf54320a" containerName="registry-server" containerID="cri-o://ac4af8839c39b14151340df99a388780529a213813225a2e7eb52b54642ea158" gracePeriod=2 Jan 29 16:01:11 crc kubenswrapper[4757]: I0129 16:01:11.278584 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2zwsd" podStartSLOduration=2.171823718 podStartE2EDuration="1m35.27856482s" podCreationTimestamp="2026-01-29 15:59:36 +0000 UTC" firstStartedPulling="2026-01-29 15:59:37.53876489 +0000 UTC m=+2940.828015127" lastFinishedPulling="2026-01-29 16:01:10.645505992 +0000 UTC m=+3033.934756229" observedRunningTime="2026-01-29 16:01:11.276489191 +0000 UTC m=+3034.565739438" watchObservedRunningTime="2026-01-29 16:01:11.27856482 +0000 UTC m=+3034.567815057" Jan 29 16:01:11 crc kubenswrapper[4757]: I0129 16:01:11.676676 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7l4x8" Jan 29 16:01:11 crc kubenswrapper[4757]: I0129 16:01:11.815244 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fdc5e7a-23a0-40c6-b5eb-0655bf54320a-catalog-content\") pod \"7fdc5e7a-23a0-40c6-b5eb-0655bf54320a\" (UID: \"7fdc5e7a-23a0-40c6-b5eb-0655bf54320a\") " Jan 29 16:01:11 crc kubenswrapper[4757]: I0129 16:01:11.815341 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fdc5e7a-23a0-40c6-b5eb-0655bf54320a-utilities\") pod \"7fdc5e7a-23a0-40c6-b5eb-0655bf54320a\" (UID: \"7fdc5e7a-23a0-40c6-b5eb-0655bf54320a\") " Jan 29 16:01:11 crc kubenswrapper[4757]: I0129 16:01:11.815394 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7bc7\" (UniqueName: \"kubernetes.io/projected/7fdc5e7a-23a0-40c6-b5eb-0655bf54320a-kube-api-access-x7bc7\") pod \"7fdc5e7a-23a0-40c6-b5eb-0655bf54320a\" (UID: \"7fdc5e7a-23a0-40c6-b5eb-0655bf54320a\") " Jan 29 16:01:11 crc kubenswrapper[4757]: I0129 16:01:11.816438 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fdc5e7a-23a0-40c6-b5eb-0655bf54320a-utilities" (OuterVolumeSpecName: "utilities") pod "7fdc5e7a-23a0-40c6-b5eb-0655bf54320a" (UID: "7fdc5e7a-23a0-40c6-b5eb-0655bf54320a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:01:11 crc kubenswrapper[4757]: I0129 16:01:11.821557 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fdc5e7a-23a0-40c6-b5eb-0655bf54320a-kube-api-access-x7bc7" (OuterVolumeSpecName: "kube-api-access-x7bc7") pod "7fdc5e7a-23a0-40c6-b5eb-0655bf54320a" (UID: "7fdc5e7a-23a0-40c6-b5eb-0655bf54320a"). InnerVolumeSpecName "kube-api-access-x7bc7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:01:11 crc kubenswrapper[4757]: I0129 16:01:11.917763 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fdc5e7a-23a0-40c6-b5eb-0655bf54320a-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:11 crc kubenswrapper[4757]: I0129 16:01:11.917824 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7bc7\" (UniqueName: \"kubernetes.io/projected/7fdc5e7a-23a0-40c6-b5eb-0655bf54320a-kube-api-access-x7bc7\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:12 crc kubenswrapper[4757]: I0129 16:01:12.256799 4757 generic.go:334] "Generic (PLEG): container finished" podID="7fdc5e7a-23a0-40c6-b5eb-0655bf54320a" containerID="ac4af8839c39b14151340df99a388780529a213813225a2e7eb52b54642ea158" exitCode=0 Jan 29 16:01:12 crc kubenswrapper[4757]: I0129 16:01:12.256875 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7l4x8" event={"ID":"7fdc5e7a-23a0-40c6-b5eb-0655bf54320a","Type":"ContainerDied","Data":"ac4af8839c39b14151340df99a388780529a213813225a2e7eb52b54642ea158"} Jan 29 16:01:12 crc kubenswrapper[4757]: I0129 16:01:12.256924 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7l4x8" event={"ID":"7fdc5e7a-23a0-40c6-b5eb-0655bf54320a","Type":"ContainerDied","Data":"5728671884c2d0868b3538beba1e6c9f40ecf05275e5feb20f83a6c8f0ded77f"} Jan 29 16:01:12 crc kubenswrapper[4757]: I0129 16:01:12.256956 4757 scope.go:117] "RemoveContainer" containerID="ac4af8839c39b14151340df99a388780529a213813225a2e7eb52b54642ea158" Jan 29 16:01:12 crc kubenswrapper[4757]: I0129 16:01:12.257704 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7l4x8" Jan 29 16:01:12 crc kubenswrapper[4757]: I0129 16:01:12.281100 4757 scope.go:117] "RemoveContainer" containerID="0af7645db62c75bff9a9e49f625e9dc17841b3b5c82c69f8db35518f1f5adbc7" Jan 29 16:01:12 crc kubenswrapper[4757]: I0129 16:01:12.302972 4757 scope.go:117] "RemoveContainer" containerID="92b16aa715f8959e9d75a981133a7d1a3576ce9d5b990a9744e4bd1cc4f0a492" Jan 29 16:01:12 crc kubenswrapper[4757]: I0129 16:01:12.337550 4757 scope.go:117] "RemoveContainer" containerID="ac4af8839c39b14151340df99a388780529a213813225a2e7eb52b54642ea158" Jan 29 16:01:12 crc kubenswrapper[4757]: E0129 16:01:12.339136 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac4af8839c39b14151340df99a388780529a213813225a2e7eb52b54642ea158\": container with ID starting with ac4af8839c39b14151340df99a388780529a213813225a2e7eb52b54642ea158 not found: ID does not exist" containerID="ac4af8839c39b14151340df99a388780529a213813225a2e7eb52b54642ea158" Jan 29 16:01:12 crc kubenswrapper[4757]: I0129 16:01:12.339314 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac4af8839c39b14151340df99a388780529a213813225a2e7eb52b54642ea158"} err="failed to get container status \"ac4af8839c39b14151340df99a388780529a213813225a2e7eb52b54642ea158\": rpc error: code = NotFound desc = could not find container \"ac4af8839c39b14151340df99a388780529a213813225a2e7eb52b54642ea158\": container with ID starting with ac4af8839c39b14151340df99a388780529a213813225a2e7eb52b54642ea158 not found: ID does not exist" Jan 29 16:01:12 crc kubenswrapper[4757]: I0129 16:01:12.339426 4757 scope.go:117] "RemoveContainer" containerID="0af7645db62c75bff9a9e49f625e9dc17841b3b5c82c69f8db35518f1f5adbc7" Jan 29 16:01:12 crc kubenswrapper[4757]: E0129 16:01:12.339940 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0af7645db62c75bff9a9e49f625e9dc17841b3b5c82c69f8db35518f1f5adbc7\": container with ID starting with 0af7645db62c75bff9a9e49f625e9dc17841b3b5c82c69f8db35518f1f5adbc7 not found: ID does not exist" containerID="0af7645db62c75bff9a9e49f625e9dc17841b3b5c82c69f8db35518f1f5adbc7" Jan 29 16:01:12 crc kubenswrapper[4757]: I0129 16:01:12.340017 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0af7645db62c75bff9a9e49f625e9dc17841b3b5c82c69f8db35518f1f5adbc7"} err="failed to get container status \"0af7645db62c75bff9a9e49f625e9dc17841b3b5c82c69f8db35518f1f5adbc7\": rpc error: code = NotFound desc = could not find container \"0af7645db62c75bff9a9e49f625e9dc17841b3b5c82c69f8db35518f1f5adbc7\": container with ID starting with 0af7645db62c75bff9a9e49f625e9dc17841b3b5c82c69f8db35518f1f5adbc7 not found: ID does not exist" Jan 29 16:01:12 crc kubenswrapper[4757]: I0129 16:01:12.340085 4757 scope.go:117] "RemoveContainer" containerID="92b16aa715f8959e9d75a981133a7d1a3576ce9d5b990a9744e4bd1cc4f0a492" Jan 29 16:01:12 crc kubenswrapper[4757]: E0129 16:01:12.340601 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92b16aa715f8959e9d75a981133a7d1a3576ce9d5b990a9744e4bd1cc4f0a492\": container with ID starting with 92b16aa715f8959e9d75a981133a7d1a3576ce9d5b990a9744e4bd1cc4f0a492 not found: ID does not exist" containerID="92b16aa715f8959e9d75a981133a7d1a3576ce9d5b990a9744e4bd1cc4f0a492" 
Jan 29 16:01:12 crc kubenswrapper[4757]: I0129 16:01:12.340656 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b16aa715f8959e9d75a981133a7d1a3576ce9d5b990a9744e4bd1cc4f0a492"} err="failed to get container status \"92b16aa715f8959e9d75a981133a7d1a3576ce9d5b990a9744e4bd1cc4f0a492\": rpc error: code = NotFound desc = could not find container \"92b16aa715f8959e9d75a981133a7d1a3576ce9d5b990a9744e4bd1cc4f0a492\": container with ID starting with 92b16aa715f8959e9d75a981133a7d1a3576ce9d5b990a9744e4bd1cc4f0a492 not found: ID does not exist" Jan 29 16:01:13 crc kubenswrapper[4757]: I0129 16:01:13.702344 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fdc5e7a-23a0-40c6-b5eb-0655bf54320a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7fdc5e7a-23a0-40c6-b5eb-0655bf54320a" (UID: "7fdc5e7a-23a0-40c6-b5eb-0655bf54320a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:01:13 crc kubenswrapper[4757]: I0129 16:01:13.749052 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fdc5e7a-23a0-40c6-b5eb-0655bf54320a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:13 crc kubenswrapper[4757]: I0129 16:01:13.791800 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7l4x8"] Jan 29 16:01:13 crc kubenswrapper[4757]: I0129 16:01:13.798881 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7l4x8"] Jan 29 16:01:14 crc kubenswrapper[4757]: I0129 16:01:14.396740 4757 scope.go:117] "RemoveContainer" containerID="341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d" Jan 29 16:01:14 crc kubenswrapper[4757]: E0129 16:01:14.397415 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 16:01:15 crc kubenswrapper[4757]: I0129 16:01:15.404696 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fdc5e7a-23a0-40c6-b5eb-0655bf54320a" path="/var/lib/kubelet/pods/7fdc5e7a-23a0-40c6-b5eb-0655bf54320a/volumes" Jan 29 16:01:16 crc kubenswrapper[4757]: I0129 16:01:16.855848 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2zwsd" Jan 29 16:01:16 crc kubenswrapper[4757]: I0129 16:01:16.855889 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2zwsd" Jan 29 16:01:16 crc kubenswrapper[4757]: I0129 16:01:16.894327 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2zwsd" Jan 29 16:01:17 crc kubenswrapper[4757]: I0129 16:01:17.343684 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2zwsd" Jan 29 16:01:18 crc kubenswrapper[4757]: I0129 16:01:18.158381 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zwsd"] Jan 29 16:01:19 crc kubenswrapper[4757]: I0129 
16:01:19.311797 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2zwsd" podUID="3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0" containerName="registry-server" containerID="cri-o://def08b26ec0573a10640430bfce45a583e9e06f8331a9da856435d0c9e6f7740" gracePeriod=2 Jan 29 16:01:20 crc kubenswrapper[4757]: I0129 16:01:20.181923 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2zwsd" Jan 29 16:01:20 crc kubenswrapper[4757]: I0129 16:01:20.238934 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0-catalog-content\") pod \"3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0\" (UID: \"3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0\") " Jan 29 16:01:20 crc kubenswrapper[4757]: I0129 16:01:20.239022 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpmwk\" (UniqueName: \"kubernetes.io/projected/3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0-kube-api-access-tpmwk\") pod \"3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0\" (UID: \"3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0\") " Jan 29 16:01:20 crc kubenswrapper[4757]: I0129 16:01:20.239144 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0-utilities\") pod \"3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0\" (UID: \"3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0\") " Jan 29 16:01:20 crc kubenswrapper[4757]: I0129 16:01:20.240339 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0-utilities" (OuterVolumeSpecName: "utilities") pod "3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0" (UID: "3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:01:20 crc kubenswrapper[4757]: I0129 16:01:20.244366 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0-kube-api-access-tpmwk" (OuterVolumeSpecName: "kube-api-access-tpmwk") pod "3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0" (UID: "3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0"). InnerVolumeSpecName "kube-api-access-tpmwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:01:20 crc kubenswrapper[4757]: I0129 16:01:20.261843 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0" (UID: "3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:01:20 crc kubenswrapper[4757]: I0129 16:01:20.319763 4757 generic.go:334] "Generic (PLEG): container finished" podID="3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0" containerID="def08b26ec0573a10640430bfce45a583e9e06f8331a9da856435d0c9e6f7740" exitCode=0 Jan 29 16:01:20 crc kubenswrapper[4757]: I0129 16:01:20.319850 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2zwsd" Jan 29 16:01:20 crc kubenswrapper[4757]: I0129 16:01:20.319872 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zwsd" event={"ID":"3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0","Type":"ContainerDied","Data":"def08b26ec0573a10640430bfce45a583e9e06f8331a9da856435d0c9e6f7740"} Jan 29 16:01:20 crc kubenswrapper[4757]: I0129 16:01:20.320491 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zwsd" event={"ID":"3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0","Type":"ContainerDied","Data":"7d0c244e740d08cadb9e4bfc6754851d13fc66a45e6cd7eb1cabfb2c784391b9"} Jan 29 16:01:20 crc kubenswrapper[4757]: I0129 16:01:20.320517 4757 scope.go:117] "RemoveContainer" containerID="def08b26ec0573a10640430bfce45a583e9e06f8331a9da856435d0c9e6f7740" Jan 29 16:01:20 crc kubenswrapper[4757]: I0129 16:01:20.337472 4757 scope.go:117] "RemoveContainer" containerID="8ee243d8123e772a8246c2e5614707e98f61a6336732035673542524225aed3d" Jan 29 16:01:20 crc kubenswrapper[4757]: I0129 16:01:20.340859 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpmwk\" (UniqueName: \"kubernetes.io/projected/3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0-kube-api-access-tpmwk\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:20 crc kubenswrapper[4757]: I0129 16:01:20.340891 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:20 crc kubenswrapper[4757]: I0129 16:01:20.340901 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:20 crc kubenswrapper[4757]: I0129 16:01:20.360409 4757 scope.go:117] "RemoveContainer" containerID="18b3dd3d24005451a7687855e077ff6bed19a6f7e3db00762ebbcd0d6a675803" Jan 29 16:01:20 crc kubenswrapper[4757]: I0129 16:01:20.362125 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zwsd"] Jan 29 16:01:20 crc kubenswrapper[4757]: I0129 16:01:20.367789 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zwsd"] Jan 29 16:01:20 crc kubenswrapper[4757]: I0129 16:01:20.379551 4757 scope.go:117] "RemoveContainer" containerID="def08b26ec0573a10640430bfce45a583e9e06f8331a9da856435d0c9e6f7740" Jan 29 16:01:20 crc kubenswrapper[4757]: E0129 16:01:20.380246 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"def08b26ec0573a10640430bfce45a583e9e06f8331a9da856435d0c9e6f7740\": container with ID starting with def08b26ec0573a10640430bfce45a583e9e06f8331a9da856435d0c9e6f7740 not found: ID does not exist" containerID="def08b26ec0573a10640430bfce45a583e9e06f8331a9da856435d0c9e6f7740" Jan 29 16:01:20 crc kubenswrapper[4757]: I0129 16:01:20.380403 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"def08b26ec0573a10640430bfce45a583e9e06f8331a9da856435d0c9e6f7740"} err="failed to get container status \"def08b26ec0573a10640430bfce45a583e9e06f8331a9da856435d0c9e6f7740\": rpc error: code = NotFound desc = could not find container \"def08b26ec0573a10640430bfce45a583e9e06f8331a9da856435d0c9e6f7740\": container with ID starting with 
def08b26ec0573a10640430bfce45a583e9e06f8331a9da856435d0c9e6f7740 not found: ID does not exist" Jan 29 16:01:20 crc kubenswrapper[4757]: I0129 16:01:20.380550 4757 scope.go:117] "RemoveContainer" containerID="8ee243d8123e772a8246c2e5614707e98f61a6336732035673542524225aed3d" Jan 29 16:01:20 crc kubenswrapper[4757]: E0129 16:01:20.380973 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ee243d8123e772a8246c2e5614707e98f61a6336732035673542524225aed3d\": container with ID starting with 8ee243d8123e772a8246c2e5614707e98f61a6336732035673542524225aed3d not found: ID does not exist" containerID="8ee243d8123e772a8246c2e5614707e98f61a6336732035673542524225aed3d" Jan 29 16:01:20 crc kubenswrapper[4757]: I0129 16:01:20.381033 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ee243d8123e772a8246c2e5614707e98f61a6336732035673542524225aed3d"} err="failed to get container status \"8ee243d8123e772a8246c2e5614707e98f61a6336732035673542524225aed3d\": rpc error: code = NotFound desc = could not find container \"8ee243d8123e772a8246c2e5614707e98f61a6336732035673542524225aed3d\": container with ID starting with 8ee243d8123e772a8246c2e5614707e98f61a6336732035673542524225aed3d not found: ID does not exist" Jan 29 16:01:20 crc kubenswrapper[4757]: I0129 16:01:20.381057 4757 scope.go:117] "RemoveContainer" containerID="18b3dd3d24005451a7687855e077ff6bed19a6f7e3db00762ebbcd0d6a675803" Jan 29 16:01:20 crc kubenswrapper[4757]: E0129 16:01:20.381331 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18b3dd3d24005451a7687855e077ff6bed19a6f7e3db00762ebbcd0d6a675803\": container with ID starting with 18b3dd3d24005451a7687855e077ff6bed19a6f7e3db00762ebbcd0d6a675803 not found: ID does not exist" containerID="18b3dd3d24005451a7687855e077ff6bed19a6f7e3db00762ebbcd0d6a675803" Jan 29 16:01:20 crc kubenswrapper[4757]: I0129 16:01:20.381386 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18b3dd3d24005451a7687855e077ff6bed19a6f7e3db00762ebbcd0d6a675803"} err="failed to get container status \"18b3dd3d24005451a7687855e077ff6bed19a6f7e3db00762ebbcd0d6a675803\": rpc error: code = NotFound desc = could not find container \"18b3dd3d24005451a7687855e077ff6bed19a6f7e3db00762ebbcd0d6a675803\": container with ID starting with 18b3dd3d24005451a7687855e077ff6bed19a6f7e3db00762ebbcd0d6a675803 not found: ID does not exist" Jan 29 16:01:21 crc kubenswrapper[4757]: I0129 16:01:21.404804 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0" path="/var/lib/kubelet/pods/3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0/volumes" Jan 29 16:01:25 crc kubenswrapper[4757]: I0129 16:01:25.396608 4757 scope.go:117] "RemoveContainer" containerID="341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d" Jan 29 16:01:25 crc kubenswrapper[4757]: E0129 16:01:25.397165 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 16:01:38 crc kubenswrapper[4757]: I0129 16:01:38.396716 
4757 scope.go:117] "RemoveContainer" containerID="341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d" Jan 29 16:01:38 crc kubenswrapper[4757]: E0129 16:01:38.397590 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 16:01:52 crc kubenswrapper[4757]: I0129 16:01:52.395907 4757 scope.go:117] "RemoveContainer" containerID="341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d" Jan 29 16:01:52 crc kubenswrapper[4757]: E0129 16:01:52.396705 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 16:02:04 crc kubenswrapper[4757]: I0129 16:02:04.396733 4757 scope.go:117] "RemoveContainer" containerID="341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d" Jan 29 16:02:04 crc kubenswrapper[4757]: E0129 16:02:04.397542 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 16:02:19 crc kubenswrapper[4757]: I0129 16:02:19.396181 4757 scope.go:117] "RemoveContainer" containerID="341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d" Jan 29 16:02:19 crc kubenswrapper[4757]: E0129 16:02:19.397335 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 16:02:32 crc kubenswrapper[4757]: I0129 16:02:32.398734 4757 scope.go:117] "RemoveContainer" containerID="341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d" Jan 29 16:02:32 crc kubenswrapper[4757]: E0129 16:02:32.399650 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 16:02:46 crc kubenswrapper[4757]: I0129 16:02:46.395635 4757 scope.go:117] "RemoveContainer" containerID="341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d" Jan 29 16:02:46 crc kubenswrapper[4757]: E0129 16:02:46.396561 4757 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 16:02:59 crc kubenswrapper[4757]: I0129 16:02:59.396472 4757 scope.go:117] "RemoveContainer" containerID="341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d" Jan 29 16:02:59 crc kubenswrapper[4757]: E0129 16:02:59.397347 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 16:03:12 crc kubenswrapper[4757]: I0129 16:03:12.396651 4757 scope.go:117] "RemoveContainer" containerID="341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d" Jan 29 16:03:12 crc kubenswrapper[4757]: E0129 16:03:12.397372 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 16:03:26 crc kubenswrapper[4757]: I0129 16:03:26.921631 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t6whz"] Jan 29 16:03:26 crc kubenswrapper[4757]: E0129 16:03:26.925347 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0" containerName="registry-server" Jan 29 16:03:26 crc kubenswrapper[4757]: I0129 16:03:26.925381 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0" containerName="registry-server" Jan 29 16:03:26 crc kubenswrapper[4757]: E0129 16:03:26.925398 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fdc5e7a-23a0-40c6-b5eb-0655bf54320a" containerName="registry-server" Jan 29 16:03:26 crc kubenswrapper[4757]: I0129 16:03:26.925405 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fdc5e7a-23a0-40c6-b5eb-0655bf54320a" containerName="registry-server" Jan 29 16:03:26 crc kubenswrapper[4757]: E0129 16:03:26.925424 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0" containerName="extract-utilities" Jan 29 16:03:26 crc kubenswrapper[4757]: I0129 16:03:26.925431 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0" containerName="extract-utilities" Jan 29 16:03:26 crc kubenswrapper[4757]: E0129 16:03:26.925439 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fdc5e7a-23a0-40c6-b5eb-0655bf54320a" containerName="extract-utilities" Jan 29 16:03:26 crc kubenswrapper[4757]: I0129 16:03:26.925447 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fdc5e7a-23a0-40c6-b5eb-0655bf54320a" containerName="extract-utilities" Jan 29 16:03:26 crc 
kubenswrapper[4757]: E0129 16:03:26.925454 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0" containerName="extract-content" Jan 29 16:03:26 crc kubenswrapper[4757]: I0129 16:03:26.925461 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0" containerName="extract-content" Jan 29 16:03:26 crc kubenswrapper[4757]: E0129 16:03:26.925473 4757 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fdc5e7a-23a0-40c6-b5eb-0655bf54320a" containerName="extract-content" Jan 29 16:03:26 crc kubenswrapper[4757]: I0129 16:03:26.925489 4757 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fdc5e7a-23a0-40c6-b5eb-0655bf54320a" containerName="extract-content" Jan 29 16:03:26 crc kubenswrapper[4757]: I0129 16:03:26.925636 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a7b2a58-fdcc-4061-8da8-dc8444c1b2a0" containerName="registry-server" Jan 29 16:03:26 crc kubenswrapper[4757]: I0129 16:03:26.925649 4757 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fdc5e7a-23a0-40c6-b5eb-0655bf54320a" containerName="registry-server" Jan 29 16:03:26 crc kubenswrapper[4757]: I0129 16:03:26.926835 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t6whz" Jan 29 16:03:26 crc kubenswrapper[4757]: I0129 16:03:26.930899 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t6whz"] Jan 29 16:03:27 crc kubenswrapper[4757]: I0129 16:03:27.056158 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/297c1c60-a0c5-4628-af4b-86a321a9d55c-utilities\") pod \"redhat-operators-t6whz\" (UID: \"297c1c60-a0c5-4628-af4b-86a321a9d55c\") " pod="openshift-marketplace/redhat-operators-t6whz" Jan 29 16:03:27 crc kubenswrapper[4757]: I0129 16:03:27.056229 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brjbr\" (UniqueName: \"kubernetes.io/projected/297c1c60-a0c5-4628-af4b-86a321a9d55c-kube-api-access-brjbr\") pod \"redhat-operators-t6whz\" (UID: \"297c1c60-a0c5-4628-af4b-86a321a9d55c\") " pod="openshift-marketplace/redhat-operators-t6whz" Jan 29 16:03:27 crc kubenswrapper[4757]: I0129 16:03:27.056492 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/297c1c60-a0c5-4628-af4b-86a321a9d55c-catalog-content\") pod \"redhat-operators-t6whz\" (UID: \"297c1c60-a0c5-4628-af4b-86a321a9d55c\") " pod="openshift-marketplace/redhat-operators-t6whz" Jan 29 16:03:27 crc kubenswrapper[4757]: I0129 16:03:27.158298 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brjbr\" (UniqueName: \"kubernetes.io/projected/297c1c60-a0c5-4628-af4b-86a321a9d55c-kube-api-access-brjbr\") pod \"redhat-operators-t6whz\" (UID: \"297c1c60-a0c5-4628-af4b-86a321a9d55c\") " pod="openshift-marketplace/redhat-operators-t6whz" Jan 29 16:03:27 crc kubenswrapper[4757]: I0129 16:03:27.158438 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/297c1c60-a0c5-4628-af4b-86a321a9d55c-catalog-content\") pod \"redhat-operators-t6whz\" (UID: \"297c1c60-a0c5-4628-af4b-86a321a9d55c\") " 
pod="openshift-marketplace/redhat-operators-t6whz" Jan 29 16:03:27 crc kubenswrapper[4757]: I0129 16:03:27.158475 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/297c1c60-a0c5-4628-af4b-86a321a9d55c-utilities\") pod \"redhat-operators-t6whz\" (UID: \"297c1c60-a0c5-4628-af4b-86a321a9d55c\") " pod="openshift-marketplace/redhat-operators-t6whz" Jan 29 16:03:27 crc kubenswrapper[4757]: I0129 16:03:27.158995 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/297c1c60-a0c5-4628-af4b-86a321a9d55c-utilities\") pod \"redhat-operators-t6whz\" (UID: \"297c1c60-a0c5-4628-af4b-86a321a9d55c\") " pod="openshift-marketplace/redhat-operators-t6whz" Jan 29 16:03:27 crc kubenswrapper[4757]: I0129 16:03:27.159101 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/297c1c60-a0c5-4628-af4b-86a321a9d55c-catalog-content\") pod \"redhat-operators-t6whz\" (UID: \"297c1c60-a0c5-4628-af4b-86a321a9d55c\") " pod="openshift-marketplace/redhat-operators-t6whz" Jan 29 16:03:27 crc kubenswrapper[4757]: I0129 16:03:27.198255 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brjbr\" (UniqueName: \"kubernetes.io/projected/297c1c60-a0c5-4628-af4b-86a321a9d55c-kube-api-access-brjbr\") pod \"redhat-operators-t6whz\" (UID: \"297c1c60-a0c5-4628-af4b-86a321a9d55c\") " pod="openshift-marketplace/redhat-operators-t6whz" Jan 29 16:03:27 crc kubenswrapper[4757]: I0129 16:03:27.245545 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t6whz" Jan 29 16:03:27 crc kubenswrapper[4757]: I0129 16:03:27.403216 4757 scope.go:117] "RemoveContainer" containerID="341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d" Jan 29 16:03:27 crc kubenswrapper[4757]: E0129 16:03:27.403923 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 16:03:27 crc kubenswrapper[4757]: I0129 16:03:27.764292 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t6whz"] Jan 29 16:03:27 crc kubenswrapper[4757]: I0129 16:03:27.834278 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6whz" event={"ID":"297c1c60-a0c5-4628-af4b-86a321a9d55c","Type":"ContainerStarted","Data":"d115a39b6dfa9c1be038499ccb711980f0ab231a8935bc36852fde6eeb455289"} Jan 29 16:03:28 crc kubenswrapper[4757]: I0129 16:03:28.841991 4757 generic.go:334] "Generic (PLEG): container finished" podID="297c1c60-a0c5-4628-af4b-86a321a9d55c" containerID="42792576b404dab3c874049bf4fa4a0c6000c9da4debbe7995052d5e548865bd" exitCode=0 Jan 29 16:03:28 crc kubenswrapper[4757]: I0129 16:03:28.842042 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6whz" event={"ID":"297c1c60-a0c5-4628-af4b-86a321a9d55c","Type":"ContainerDied","Data":"42792576b404dab3c874049bf4fa4a0c6000c9da4debbe7995052d5e548865bd"} Jan 29 16:03:29 crc kubenswrapper[4757]: I0129 
16:03:29.319097 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-stskz"] Jan 29 16:03:29 crc kubenswrapper[4757]: I0129 16:03:29.320970 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-stskz" Jan 29 16:03:29 crc kubenswrapper[4757]: I0129 16:03:29.342225 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-stskz"] Jan 29 16:03:29 crc kubenswrapper[4757]: I0129 16:03:29.389163 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/651a15e3-767e-4e91-8083-2d409b75278d-catalog-content\") pod \"community-operators-stskz\" (UID: \"651a15e3-767e-4e91-8083-2d409b75278d\") " pod="openshift-marketplace/community-operators-stskz" Jan 29 16:03:29 crc kubenswrapper[4757]: I0129 16:03:29.389286 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49mmj\" (UniqueName: \"kubernetes.io/projected/651a15e3-767e-4e91-8083-2d409b75278d-kube-api-access-49mmj\") pod \"community-operators-stskz\" (UID: \"651a15e3-767e-4e91-8083-2d409b75278d\") " pod="openshift-marketplace/community-operators-stskz" Jan 29 16:03:29 crc kubenswrapper[4757]: I0129 16:03:29.389318 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/651a15e3-767e-4e91-8083-2d409b75278d-utilities\") pod \"community-operators-stskz\" (UID: \"651a15e3-767e-4e91-8083-2d409b75278d\") " pod="openshift-marketplace/community-operators-stskz" Jan 29 16:03:29 crc kubenswrapper[4757]: I0129 16:03:29.490348 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/651a15e3-767e-4e91-8083-2d409b75278d-catalog-content\") pod \"community-operators-stskz\" (UID: \"651a15e3-767e-4e91-8083-2d409b75278d\") " pod="openshift-marketplace/community-operators-stskz" Jan 29 16:03:29 crc kubenswrapper[4757]: I0129 16:03:29.490852 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/651a15e3-767e-4e91-8083-2d409b75278d-catalog-content\") pod \"community-operators-stskz\" (UID: \"651a15e3-767e-4e91-8083-2d409b75278d\") " pod="openshift-marketplace/community-operators-stskz" Jan 29 16:03:29 crc kubenswrapper[4757]: I0129 16:03:29.491054 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49mmj\" (UniqueName: \"kubernetes.io/projected/651a15e3-767e-4e91-8083-2d409b75278d-kube-api-access-49mmj\") pod \"community-operators-stskz\" (UID: \"651a15e3-767e-4e91-8083-2d409b75278d\") " pod="openshift-marketplace/community-operators-stskz" Jan 29 16:03:29 crc kubenswrapper[4757]: I0129 16:03:29.491084 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/651a15e3-767e-4e91-8083-2d409b75278d-utilities\") pod \"community-operators-stskz\" (UID: \"651a15e3-767e-4e91-8083-2d409b75278d\") " pod="openshift-marketplace/community-operators-stskz" Jan 29 16:03:29 crc kubenswrapper[4757]: I0129 16:03:29.491386 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/651a15e3-767e-4e91-8083-2d409b75278d-utilities\") pod \"community-operators-stskz\" (UID: \"651a15e3-767e-4e91-8083-2d409b75278d\") " pod="openshift-marketplace/community-operators-stskz" Jan 29 16:03:29 crc kubenswrapper[4757]: I0129 16:03:29.512491 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49mmj\" (UniqueName: \"kubernetes.io/projected/651a15e3-767e-4e91-8083-2d409b75278d-kube-api-access-49mmj\") pod \"community-operators-stskz\" (UID: \"651a15e3-767e-4e91-8083-2d409b75278d\") " pod="openshift-marketplace/community-operators-stskz" Jan 29 16:03:29 crc kubenswrapper[4757]: I0129 16:03:29.643517 4757 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-stskz" Jan 29 16:03:30 crc kubenswrapper[4757]: I0129 16:03:30.153916 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-stskz"] Jan 29 16:03:30 crc kubenswrapper[4757]: W0129 16:03:30.161655 4757 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod651a15e3_767e_4e91_8083_2d409b75278d.slice/crio-78a2dda291eb757402db41b85d05deb0decc5ee9be098424c2dec0908114d29f WatchSource:0}: Error finding container 78a2dda291eb757402db41b85d05deb0decc5ee9be098424c2dec0908114d29f: Status 404 returned error can't find the container with id 78a2dda291eb757402db41b85d05deb0decc5ee9be098424c2dec0908114d29f Jan 29 16:03:30 crc kubenswrapper[4757]: I0129 16:03:30.857089 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6whz" event={"ID":"297c1c60-a0c5-4628-af4b-86a321a9d55c","Type":"ContainerStarted","Data":"21f7de58ec7c6a661ce153c024e248510fc04d1831ca2541f20afd42cff37f7a"} Jan 29 16:03:30 crc kubenswrapper[4757]: I0129 16:03:30.858928 4757 generic.go:334] "Generic (PLEG): container finished" podID="651a15e3-767e-4e91-8083-2d409b75278d" containerID="74b7bcc2c627bd29c1ba893049af22fb65499df874dee9c334578bca6a6c778d" exitCode=0 Jan 29 16:03:30 crc kubenswrapper[4757]: I0129 16:03:30.858975 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-stskz" event={"ID":"651a15e3-767e-4e91-8083-2d409b75278d","Type":"ContainerDied","Data":"74b7bcc2c627bd29c1ba893049af22fb65499df874dee9c334578bca6a6c778d"} Jan 29 16:03:30 crc kubenswrapper[4757]: I0129 16:03:30.859039 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-stskz" event={"ID":"651a15e3-767e-4e91-8083-2d409b75278d","Type":"ContainerStarted","Data":"78a2dda291eb757402db41b85d05deb0decc5ee9be098424c2dec0908114d29f"} Jan 29 16:03:31 crc kubenswrapper[4757]: I0129 16:03:31.416769 4757 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-g497m/must-gather-tq6qx"] Jan 29 16:03:31 crc kubenswrapper[4757]: I0129 16:03:31.418066 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-g497m/must-gather-tq6qx" Jan 29 16:03:31 crc kubenswrapper[4757]: I0129 16:03:31.429847 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-g497m"/"openshift-service-ca.crt" Jan 29 16:03:31 crc kubenswrapper[4757]: I0129 16:03:31.431869 4757 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-g497m"/"default-dockercfg-72ccn" Jan 29 16:03:31 crc kubenswrapper[4757]: I0129 16:03:31.431882 4757 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-g497m"/"kube-root-ca.crt" Jan 29 16:03:31 crc kubenswrapper[4757]: I0129 16:03:31.525360 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vwlc\" (UniqueName: \"kubernetes.io/projected/48f6b0d3-3584-4888-a995-05cd919020b5-kube-api-access-7vwlc\") pod \"must-gather-tq6qx\" (UID: \"48f6b0d3-3584-4888-a995-05cd919020b5\") " pod="openshift-must-gather-g497m/must-gather-tq6qx" Jan 29 16:03:31 crc kubenswrapper[4757]: I0129 16:03:31.533509 4757 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/48f6b0d3-3584-4888-a995-05cd919020b5-must-gather-output\") pod \"must-gather-tq6qx\" (UID: \"48f6b0d3-3584-4888-a995-05cd919020b5\") " pod="openshift-must-gather-g497m/must-gather-tq6qx" Jan 29 16:03:31 crc kubenswrapper[4757]: I0129 16:03:31.550786 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-g497m/must-gather-tq6qx"] Jan 29 16:03:31 crc kubenswrapper[4757]: I0129 16:03:31.635572 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/48f6b0d3-3584-4888-a995-05cd919020b5-must-gather-output\") pod \"must-gather-tq6qx\" (UID: \"48f6b0d3-3584-4888-a995-05cd919020b5\") " pod="openshift-must-gather-g497m/must-gather-tq6qx" Jan 29 16:03:31 crc kubenswrapper[4757]: I0129 16:03:31.635707 4757 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vwlc\" (UniqueName: \"kubernetes.io/projected/48f6b0d3-3584-4888-a995-05cd919020b5-kube-api-access-7vwlc\") pod \"must-gather-tq6qx\" (UID: \"48f6b0d3-3584-4888-a995-05cd919020b5\") " pod="openshift-must-gather-g497m/must-gather-tq6qx" Jan 29 16:03:31 crc kubenswrapper[4757]: I0129 16:03:31.636448 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/48f6b0d3-3584-4888-a995-05cd919020b5-must-gather-output\") pod \"must-gather-tq6qx\" (UID: \"48f6b0d3-3584-4888-a995-05cd919020b5\") " pod="openshift-must-gather-g497m/must-gather-tq6qx" Jan 29 16:03:31 crc kubenswrapper[4757]: I0129 16:03:31.660091 4757 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vwlc\" (UniqueName: \"kubernetes.io/projected/48f6b0d3-3584-4888-a995-05cd919020b5-kube-api-access-7vwlc\") pod \"must-gather-tq6qx\" (UID: \"48f6b0d3-3584-4888-a995-05cd919020b5\") " pod="openshift-must-gather-g497m/must-gather-tq6qx" Jan 29 16:03:31 crc kubenswrapper[4757]: I0129 16:03:31.732960 4757 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-g497m/must-gather-tq6qx" Jan 29 16:03:31 crc kubenswrapper[4757]: I0129 16:03:31.873382 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-stskz" event={"ID":"651a15e3-767e-4e91-8083-2d409b75278d","Type":"ContainerStarted","Data":"a521392f3f334b09cd0eba74c5e86acf01b16132e30208cce6e475cbb616b9be"} Jan 29 16:03:32 crc kubenswrapper[4757]: I0129 16:03:32.250758 4757 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-g497m/must-gather-tq6qx"] Jan 29 16:03:32 crc kubenswrapper[4757]: I0129 16:03:32.881726 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g497m/must-gather-tq6qx" event={"ID":"48f6b0d3-3584-4888-a995-05cd919020b5","Type":"ContainerStarted","Data":"7972c2d83174305036123edca045b16f7dbf2ac764c985450f232aec8eb01fb3"} Jan 29 16:03:32 crc kubenswrapper[4757]: I0129 16:03:32.885479 4757 generic.go:334] "Generic (PLEG): container finished" podID="651a15e3-767e-4e91-8083-2d409b75278d" containerID="a521392f3f334b09cd0eba74c5e86acf01b16132e30208cce6e475cbb616b9be" exitCode=0 Jan 29 16:03:32 crc kubenswrapper[4757]: I0129 16:03:32.885522 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-stskz" event={"ID":"651a15e3-767e-4e91-8083-2d409b75278d","Type":"ContainerDied","Data":"a521392f3f334b09cd0eba74c5e86acf01b16132e30208cce6e475cbb616b9be"} Jan 29 16:03:33 crc kubenswrapper[4757]: I0129 16:03:33.897943 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-stskz" event={"ID":"651a15e3-767e-4e91-8083-2d409b75278d","Type":"ContainerStarted","Data":"d455d83df45a03fd206324995de3231f81578fbb378255076eb3637989332d1a"} Jan 29 16:03:33 crc kubenswrapper[4757]: I0129 16:03:33.930195 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-stskz" podStartSLOduration=2.440018597 podStartE2EDuration="4.930173691s" podCreationTimestamp="2026-01-29 16:03:29 +0000 UTC" firstStartedPulling="2026-01-29 16:03:30.860560306 +0000 UTC m=+3174.149810553" lastFinishedPulling="2026-01-29 16:03:33.35071541 +0000 UTC m=+3176.639965647" observedRunningTime="2026-01-29 16:03:33.923717598 +0000 UTC m=+3177.212967875" watchObservedRunningTime="2026-01-29 16:03:33.930173691 +0000 UTC m=+3177.219423938" Jan 29 16:03:34 crc kubenswrapper[4757]: I0129 16:03:34.916032 4757 generic.go:334] "Generic (PLEG): container finished" podID="297c1c60-a0c5-4628-af4b-86a321a9d55c" containerID="21f7de58ec7c6a661ce153c024e248510fc04d1831ca2541f20afd42cff37f7a" exitCode=0 Jan 29 16:03:34 crc kubenswrapper[4757]: I0129 16:03:34.916092 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6whz" event={"ID":"297c1c60-a0c5-4628-af4b-86a321a9d55c","Type":"ContainerDied","Data":"21f7de58ec7c6a661ce153c024e248510fc04d1831ca2541f20afd42cff37f7a"} Jan 29 16:03:39 crc kubenswrapper[4757]: I0129 16:03:39.644815 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-stskz" Jan 29 16:03:39 crc kubenswrapper[4757]: I0129 16:03:39.645447 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-stskz" Jan 29 16:03:39 crc kubenswrapper[4757]: I0129 16:03:39.690231 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-stskz" Jan 29 16:03:40 crc kubenswrapper[4757]: I0129 16:03:40.005532 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-stskz" Jan 29 16:03:40 crc kubenswrapper[4757]: I0129 16:03:40.073507 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-stskz"] Jan 29 16:03:40 crc kubenswrapper[4757]: I0129 16:03:40.960325 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g497m/must-gather-tq6qx" event={"ID":"48f6b0d3-3584-4888-a995-05cd919020b5","Type":"ContainerStarted","Data":"38a4d990d224d2b754227406b253b9175fd6ec2d8acfab524ce0e5fc1254ba1b"} Jan 29 16:03:40 crc kubenswrapper[4757]: I0129 16:03:40.960637 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g497m/must-gather-tq6qx" event={"ID":"48f6b0d3-3584-4888-a995-05cd919020b5","Type":"ContainerStarted","Data":"c3ef2760ef914cad827c349033de933b671679fd6264994970eb75776640f2fd"} Jan 29 16:03:40 crc kubenswrapper[4757]: I0129 16:03:40.963009 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6whz" event={"ID":"297c1c60-a0c5-4628-af4b-86a321a9d55c","Type":"ContainerStarted","Data":"71cbad54dd1084def645e193ecaee4407806220b62fb1f37108bebabb36ded61"} Jan 29 16:03:40 crc kubenswrapper[4757]: I0129 16:03:40.983599 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-g497m/must-gather-tq6qx" podStartSLOduration=2.413541407 podStartE2EDuration="9.983576703s" podCreationTimestamp="2026-01-29 16:03:31 +0000 UTC" firstStartedPulling="2026-01-29 16:03:32.288667063 +0000 UTC m=+3175.577917300" lastFinishedPulling="2026-01-29 16:03:39.858702359 +0000 UTC m=+3183.147952596" observedRunningTime="2026-01-29 16:03:40.979076425 +0000 UTC m=+3184.268326672" watchObservedRunningTime="2026-01-29 16:03:40.983576703 +0000 UTC m=+3184.272826940" Jan 29 16:03:41 crc kubenswrapper[4757]: I0129 16:03:41.003376 4757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t6whz" podStartSLOduration=3.987366682 podStartE2EDuration="15.003356396s" podCreationTimestamp="2026-01-29 16:03:26 +0000 UTC" firstStartedPulling="2026-01-29 16:03:28.843717984 +0000 UTC m=+3172.132968221" lastFinishedPulling="2026-01-29 16:03:39.859707698 +0000 UTC m=+3183.148957935" observedRunningTime="2026-01-29 16:03:40.998714874 +0000 UTC m=+3184.287965121" watchObservedRunningTime="2026-01-29 16:03:41.003356396 +0000 UTC m=+3184.292606623" Jan 29 16:03:41 crc kubenswrapper[4757]: I0129 16:03:41.971466 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-stskz" podUID="651a15e3-767e-4e91-8083-2d409b75278d" containerName="registry-server" containerID="cri-o://d455d83df45a03fd206324995de3231f81578fbb378255076eb3637989332d1a" gracePeriod=2 Jan 29 16:03:42 crc kubenswrapper[4757]: I0129 16:03:42.396804 4757 scope.go:117] "RemoveContainer" containerID="341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d" Jan 29 16:03:42 crc kubenswrapper[4757]: E0129 16:03:42.397194 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 16:03:42 crc kubenswrapper[4757]: I0129 16:03:42.525512 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-stskz" Jan 29 16:03:42 crc kubenswrapper[4757]: I0129 16:03:42.713954 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/651a15e3-767e-4e91-8083-2d409b75278d-catalog-content\") pod \"651a15e3-767e-4e91-8083-2d409b75278d\" (UID: \"651a15e3-767e-4e91-8083-2d409b75278d\") " Jan 29 16:03:42 crc kubenswrapper[4757]: I0129 16:03:42.714032 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/651a15e3-767e-4e91-8083-2d409b75278d-utilities\") pod \"651a15e3-767e-4e91-8083-2d409b75278d\" (UID: \"651a15e3-767e-4e91-8083-2d409b75278d\") " Jan 29 16:03:42 crc kubenswrapper[4757]: I0129 16:03:42.714160 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49mmj\" (UniqueName: \"kubernetes.io/projected/651a15e3-767e-4e91-8083-2d409b75278d-kube-api-access-49mmj\") pod \"651a15e3-767e-4e91-8083-2d409b75278d\" (UID: \"651a15e3-767e-4e91-8083-2d409b75278d\") " Jan 29 16:03:42 crc kubenswrapper[4757]: I0129 16:03:42.715425 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/651a15e3-767e-4e91-8083-2d409b75278d-utilities" (OuterVolumeSpecName: "utilities") pod "651a15e3-767e-4e91-8083-2d409b75278d" (UID: "651a15e3-767e-4e91-8083-2d409b75278d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:03:42 crc kubenswrapper[4757]: I0129 16:03:42.725012 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/651a15e3-767e-4e91-8083-2d409b75278d-kube-api-access-49mmj" (OuterVolumeSpecName: "kube-api-access-49mmj") pod "651a15e3-767e-4e91-8083-2d409b75278d" (UID: "651a15e3-767e-4e91-8083-2d409b75278d"). InnerVolumeSpecName "kube-api-access-49mmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:03:42 crc kubenswrapper[4757]: I0129 16:03:42.772779 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/651a15e3-767e-4e91-8083-2d409b75278d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "651a15e3-767e-4e91-8083-2d409b75278d" (UID: "651a15e3-767e-4e91-8083-2d409b75278d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:03:42 crc kubenswrapper[4757]: I0129 16:03:42.815455 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49mmj\" (UniqueName: \"kubernetes.io/projected/651a15e3-767e-4e91-8083-2d409b75278d-kube-api-access-49mmj\") on node \"crc\" DevicePath \"\"" Jan 29 16:03:42 crc kubenswrapper[4757]: I0129 16:03:42.815492 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/651a15e3-767e-4e91-8083-2d409b75278d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:03:42 crc kubenswrapper[4757]: I0129 16:03:42.815504 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/651a15e3-767e-4e91-8083-2d409b75278d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:03:42 crc kubenswrapper[4757]: I0129 16:03:42.980414 4757 generic.go:334] "Generic (PLEG): container finished" podID="651a15e3-767e-4e91-8083-2d409b75278d" containerID="d455d83df45a03fd206324995de3231f81578fbb378255076eb3637989332d1a" exitCode=0 Jan 29 16:03:42 crc kubenswrapper[4757]: I0129 16:03:42.980470 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-stskz" event={"ID":"651a15e3-767e-4e91-8083-2d409b75278d","Type":"ContainerDied","Data":"d455d83df45a03fd206324995de3231f81578fbb378255076eb3637989332d1a"} Jan 29 16:03:42 crc kubenswrapper[4757]: I0129 16:03:42.980490 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-stskz" Jan 29 16:03:42 crc kubenswrapper[4757]: I0129 16:03:42.980508 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-stskz" event={"ID":"651a15e3-767e-4e91-8083-2d409b75278d","Type":"ContainerDied","Data":"78a2dda291eb757402db41b85d05deb0decc5ee9be098424c2dec0908114d29f"} Jan 29 16:03:42 crc kubenswrapper[4757]: I0129 16:03:42.980530 4757 scope.go:117] "RemoveContainer" containerID="d455d83df45a03fd206324995de3231f81578fbb378255076eb3637989332d1a" Jan 29 16:03:43 crc kubenswrapper[4757]: I0129 16:03:43.002482 4757 scope.go:117] "RemoveContainer" containerID="a521392f3f334b09cd0eba74c5e86acf01b16132e30208cce6e475cbb616b9be" Jan 29 16:03:43 crc kubenswrapper[4757]: I0129 16:03:43.031354 4757 scope.go:117] "RemoveContainer" containerID="74b7bcc2c627bd29c1ba893049af22fb65499df874dee9c334578bca6a6c778d" Jan 29 16:03:43 crc kubenswrapper[4757]: I0129 16:03:43.033257 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-stskz"] Jan 29 16:03:43 crc kubenswrapper[4757]: I0129 16:03:43.041292 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-stskz"] Jan 29 16:03:43 crc kubenswrapper[4757]: I0129 16:03:43.052839 4757 scope.go:117] "RemoveContainer" containerID="d455d83df45a03fd206324995de3231f81578fbb378255076eb3637989332d1a" Jan 29 16:03:43 crc kubenswrapper[4757]: E0129 16:03:43.053410 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d455d83df45a03fd206324995de3231f81578fbb378255076eb3637989332d1a\": container with ID starting with d455d83df45a03fd206324995de3231f81578fbb378255076eb3637989332d1a not found: ID does not exist" containerID="d455d83df45a03fd206324995de3231f81578fbb378255076eb3637989332d1a" Jan 29 16:03:43 crc kubenswrapper[4757]: I0129 16:03:43.053566 
4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d455d83df45a03fd206324995de3231f81578fbb378255076eb3637989332d1a"} err="failed to get container status \"d455d83df45a03fd206324995de3231f81578fbb378255076eb3637989332d1a\": rpc error: code = NotFound desc = could not find container \"d455d83df45a03fd206324995de3231f81578fbb378255076eb3637989332d1a\": container with ID starting with d455d83df45a03fd206324995de3231f81578fbb378255076eb3637989332d1a not found: ID does not exist" Jan 29 16:03:43 crc kubenswrapper[4757]: I0129 16:03:43.053669 4757 scope.go:117] "RemoveContainer" containerID="a521392f3f334b09cd0eba74c5e86acf01b16132e30208cce6e475cbb616b9be" Jan 29 16:03:43 crc kubenswrapper[4757]: E0129 16:03:43.054095 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a521392f3f334b09cd0eba74c5e86acf01b16132e30208cce6e475cbb616b9be\": container with ID starting with a521392f3f334b09cd0eba74c5e86acf01b16132e30208cce6e475cbb616b9be not found: ID does not exist" containerID="a521392f3f334b09cd0eba74c5e86acf01b16132e30208cce6e475cbb616b9be" Jan 29 16:03:43 crc kubenswrapper[4757]: I0129 16:03:43.054207 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a521392f3f334b09cd0eba74c5e86acf01b16132e30208cce6e475cbb616b9be"} err="failed to get container status \"a521392f3f334b09cd0eba74c5e86acf01b16132e30208cce6e475cbb616b9be\": rpc error: code = NotFound desc = could not find container \"a521392f3f334b09cd0eba74c5e86acf01b16132e30208cce6e475cbb616b9be\": container with ID starting with a521392f3f334b09cd0eba74c5e86acf01b16132e30208cce6e475cbb616b9be not found: ID does not exist" Jan 29 16:03:43 crc kubenswrapper[4757]: I0129 16:03:43.054314 4757 scope.go:117] "RemoveContainer" containerID="74b7bcc2c627bd29c1ba893049af22fb65499df874dee9c334578bca6a6c778d" Jan 29 16:03:43 crc kubenswrapper[4757]: E0129 16:03:43.054688 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74b7bcc2c627bd29c1ba893049af22fb65499df874dee9c334578bca6a6c778d\": container with ID starting with 74b7bcc2c627bd29c1ba893049af22fb65499df874dee9c334578bca6a6c778d not found: ID does not exist" containerID="74b7bcc2c627bd29c1ba893049af22fb65499df874dee9c334578bca6a6c778d" Jan 29 16:03:43 crc kubenswrapper[4757]: I0129 16:03:43.054788 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74b7bcc2c627bd29c1ba893049af22fb65499df874dee9c334578bca6a6c778d"} err="failed to get container status \"74b7bcc2c627bd29c1ba893049af22fb65499df874dee9c334578bca6a6c778d\": rpc error: code = NotFound desc = could not find container \"74b7bcc2c627bd29c1ba893049af22fb65499df874dee9c334578bca6a6c778d\": container with ID starting with 74b7bcc2c627bd29c1ba893049af22fb65499df874dee9c334578bca6a6c778d not found: ID does not exist" Jan 29 16:03:43 crc kubenswrapper[4757]: I0129 16:03:43.406000 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="651a15e3-767e-4e91-8083-2d409b75278d" path="/var/lib/kubelet/pods/651a15e3-767e-4e91-8083-2d409b75278d/volumes" Jan 29 16:03:47 crc kubenswrapper[4757]: I0129 16:03:47.246451 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t6whz" Jan 29 16:03:47 crc kubenswrapper[4757]: I0129 16:03:47.246819 4757 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t6whz" Jan 29 16:03:48 crc kubenswrapper[4757]: I0129 16:03:48.285596 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t6whz" podUID="297c1c60-a0c5-4628-af4b-86a321a9d55c" containerName="registry-server" probeResult="failure" output=< Jan 29 16:03:48 crc kubenswrapper[4757]: timeout: failed to connect service ":50051" within 1s Jan 29 16:03:48 crc kubenswrapper[4757]: > Jan 29 16:03:54 crc kubenswrapper[4757]: I0129 16:03:54.396645 4757 scope.go:117] "RemoveContainer" containerID="341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d" Jan 29 16:03:54 crc kubenswrapper[4757]: E0129 16:03:54.397247 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 16:03:58 crc kubenswrapper[4757]: I0129 16:03:58.289231 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t6whz" podUID="297c1c60-a0c5-4628-af4b-86a321a9d55c" containerName="registry-server" probeResult="failure" output=< Jan 29 16:03:58 crc kubenswrapper[4757]: timeout: failed to connect service ":50051" within 1s Jan 29 16:03:58 crc kubenswrapper[4757]: > Jan 29 16:04:05 crc kubenswrapper[4757]: I0129 16:04:05.396438 4757 scope.go:117] "RemoveContainer" containerID="341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d" Jan 29 16:04:05 crc kubenswrapper[4757]: E0129 16:04:05.397108 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 16:04:08 crc kubenswrapper[4757]: I0129 16:04:08.289221 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t6whz" podUID="297c1c60-a0c5-4628-af4b-86a321a9d55c" containerName="registry-server" probeResult="failure" output=< Jan 29 16:04:08 crc kubenswrapper[4757]: timeout: failed to connect service ":50051" within 1s Jan 29 16:04:08 crc kubenswrapper[4757]: > Jan 29 16:04:17 crc kubenswrapper[4757]: I0129 16:04:17.400302 4757 scope.go:117] "RemoveContainer" containerID="341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d" Jan 29 16:04:17 crc kubenswrapper[4757]: E0129 16:04:17.402663 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 16:04:18 crc kubenswrapper[4757]: I0129 16:04:18.283559 4757 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t6whz" 
podUID="297c1c60-a0c5-4628-af4b-86a321a9d55c" containerName="registry-server" probeResult="failure" output=< Jan 29 16:04:18 crc kubenswrapper[4757]: timeout: failed to connect service ":50051" within 1s Jan 29 16:04:18 crc kubenswrapper[4757]: > Jan 29 16:04:27 crc kubenswrapper[4757]: I0129 16:04:27.297287 4757 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t6whz" Jan 29 16:04:27 crc kubenswrapper[4757]: I0129 16:04:27.353841 4757 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t6whz" Jan 29 16:04:28 crc kubenswrapper[4757]: I0129 16:04:28.141995 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t6whz"] Jan 29 16:04:29 crc kubenswrapper[4757]: I0129 16:04:29.266201 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t6whz" podUID="297c1c60-a0c5-4628-af4b-86a321a9d55c" containerName="registry-server" containerID="cri-o://71cbad54dd1084def645e193ecaee4407806220b62fb1f37108bebabb36ded61" gracePeriod=2 Jan 29 16:04:29 crc kubenswrapper[4757]: I0129 16:04:29.667728 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t6whz" Jan 29 16:04:29 crc kubenswrapper[4757]: I0129 16:04:29.681296 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/297c1c60-a0c5-4628-af4b-86a321a9d55c-catalog-content\") pod \"297c1c60-a0c5-4628-af4b-86a321a9d55c\" (UID: \"297c1c60-a0c5-4628-af4b-86a321a9d55c\") " Jan 29 16:04:29 crc kubenswrapper[4757]: I0129 16:04:29.681333 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/297c1c60-a0c5-4628-af4b-86a321a9d55c-utilities\") pod \"297c1c60-a0c5-4628-af4b-86a321a9d55c\" (UID: \"297c1c60-a0c5-4628-af4b-86a321a9d55c\") " Jan 29 16:04:29 crc kubenswrapper[4757]: I0129 16:04:29.681411 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brjbr\" (UniqueName: \"kubernetes.io/projected/297c1c60-a0c5-4628-af4b-86a321a9d55c-kube-api-access-brjbr\") pod \"297c1c60-a0c5-4628-af4b-86a321a9d55c\" (UID: \"297c1c60-a0c5-4628-af4b-86a321a9d55c\") " Jan 29 16:04:29 crc kubenswrapper[4757]: I0129 16:04:29.682836 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/297c1c60-a0c5-4628-af4b-86a321a9d55c-utilities" (OuterVolumeSpecName: "utilities") pod "297c1c60-a0c5-4628-af4b-86a321a9d55c" (UID: "297c1c60-a0c5-4628-af4b-86a321a9d55c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:04:29 crc kubenswrapper[4757]: I0129 16:04:29.696870 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/297c1c60-a0c5-4628-af4b-86a321a9d55c-kube-api-access-brjbr" (OuterVolumeSpecName: "kube-api-access-brjbr") pod "297c1c60-a0c5-4628-af4b-86a321a9d55c" (UID: "297c1c60-a0c5-4628-af4b-86a321a9d55c"). InnerVolumeSpecName "kube-api-access-brjbr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:04:29 crc kubenswrapper[4757]: I0129 16:04:29.782747 4757 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/297c1c60-a0c5-4628-af4b-86a321a9d55c-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:04:29 crc kubenswrapper[4757]: I0129 16:04:29.782785 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brjbr\" (UniqueName: \"kubernetes.io/projected/297c1c60-a0c5-4628-af4b-86a321a9d55c-kube-api-access-brjbr\") on node \"crc\" DevicePath \"\"" Jan 29 16:04:29 crc kubenswrapper[4757]: I0129 16:04:29.817239 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/297c1c60-a0c5-4628-af4b-86a321a9d55c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "297c1c60-a0c5-4628-af4b-86a321a9d55c" (UID: "297c1c60-a0c5-4628-af4b-86a321a9d55c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:04:29 crc kubenswrapper[4757]: I0129 16:04:29.884466 4757 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/297c1c60-a0c5-4628-af4b-86a321a9d55c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:04:30 crc kubenswrapper[4757]: I0129 16:04:30.283676 4757 generic.go:334] "Generic (PLEG): container finished" podID="297c1c60-a0c5-4628-af4b-86a321a9d55c" containerID="71cbad54dd1084def645e193ecaee4407806220b62fb1f37108bebabb36ded61" exitCode=0 Jan 29 16:04:30 crc kubenswrapper[4757]: I0129 16:04:30.283751 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6whz" event={"ID":"297c1c60-a0c5-4628-af4b-86a321a9d55c","Type":"ContainerDied","Data":"71cbad54dd1084def645e193ecaee4407806220b62fb1f37108bebabb36ded61"} Jan 29 16:04:30 crc kubenswrapper[4757]: I0129 16:04:30.284050 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6whz" event={"ID":"297c1c60-a0c5-4628-af4b-86a321a9d55c","Type":"ContainerDied","Data":"d115a39b6dfa9c1be038499ccb711980f0ab231a8935bc36852fde6eeb455289"} Jan 29 16:04:30 crc kubenswrapper[4757]: I0129 16:04:30.284070 4757 scope.go:117] "RemoveContainer" containerID="71cbad54dd1084def645e193ecaee4407806220b62fb1f37108bebabb36ded61" Jan 29 16:04:30 crc kubenswrapper[4757]: I0129 16:04:30.283773 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t6whz" Jan 29 16:04:30 crc kubenswrapper[4757]: I0129 16:04:30.308835 4757 scope.go:117] "RemoveContainer" containerID="21f7de58ec7c6a661ce153c024e248510fc04d1831ca2541f20afd42cff37f7a" Jan 29 16:04:30 crc kubenswrapper[4757]: I0129 16:04:30.325289 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t6whz"] Jan 29 16:04:30 crc kubenswrapper[4757]: I0129 16:04:30.332007 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t6whz"] Jan 29 16:04:30 crc kubenswrapper[4757]: I0129 16:04:30.337488 4757 scope.go:117] "RemoveContainer" containerID="42792576b404dab3c874049bf4fa4a0c6000c9da4debbe7995052d5e548865bd" Jan 29 16:04:30 crc kubenswrapper[4757]: I0129 16:04:30.357241 4757 scope.go:117] "RemoveContainer" containerID="71cbad54dd1084def645e193ecaee4407806220b62fb1f37108bebabb36ded61" Jan 29 16:04:30 crc kubenswrapper[4757]: E0129 16:04:30.357786 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71cbad54dd1084def645e193ecaee4407806220b62fb1f37108bebabb36ded61\": container with ID starting with 71cbad54dd1084def645e193ecaee4407806220b62fb1f37108bebabb36ded61 not found: ID does not exist" containerID="71cbad54dd1084def645e193ecaee4407806220b62fb1f37108bebabb36ded61" Jan 29 16:04:30 crc kubenswrapper[4757]: I0129 16:04:30.357822 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71cbad54dd1084def645e193ecaee4407806220b62fb1f37108bebabb36ded61"} err="failed to get container status \"71cbad54dd1084def645e193ecaee4407806220b62fb1f37108bebabb36ded61\": rpc error: code = NotFound desc = could not find container \"71cbad54dd1084def645e193ecaee4407806220b62fb1f37108bebabb36ded61\": container with ID starting with 71cbad54dd1084def645e193ecaee4407806220b62fb1f37108bebabb36ded61 not found: ID does not exist" Jan 29 16:04:30 crc kubenswrapper[4757]: I0129 16:04:30.357851 4757 scope.go:117] "RemoveContainer" containerID="21f7de58ec7c6a661ce153c024e248510fc04d1831ca2541f20afd42cff37f7a" Jan 29 16:04:30 crc kubenswrapper[4757]: E0129 16:04:30.358197 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21f7de58ec7c6a661ce153c024e248510fc04d1831ca2541f20afd42cff37f7a\": container with ID starting with 21f7de58ec7c6a661ce153c024e248510fc04d1831ca2541f20afd42cff37f7a not found: ID does not exist" containerID="21f7de58ec7c6a661ce153c024e248510fc04d1831ca2541f20afd42cff37f7a" Jan 29 16:04:30 crc kubenswrapper[4757]: I0129 16:04:30.358226 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21f7de58ec7c6a661ce153c024e248510fc04d1831ca2541f20afd42cff37f7a"} err="failed to get container status \"21f7de58ec7c6a661ce153c024e248510fc04d1831ca2541f20afd42cff37f7a\": rpc error: code = NotFound desc = could not find container \"21f7de58ec7c6a661ce153c024e248510fc04d1831ca2541f20afd42cff37f7a\": container with ID starting with 21f7de58ec7c6a661ce153c024e248510fc04d1831ca2541f20afd42cff37f7a not found: ID does not exist" Jan 29 16:04:30 crc kubenswrapper[4757]: I0129 16:04:30.358244 4757 scope.go:117] "RemoveContainer" containerID="42792576b404dab3c874049bf4fa4a0c6000c9da4debbe7995052d5e548865bd" Jan 29 16:04:30 crc kubenswrapper[4757]: E0129 16:04:30.358919 4757 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"42792576b404dab3c874049bf4fa4a0c6000c9da4debbe7995052d5e548865bd\": container with ID starting with 42792576b404dab3c874049bf4fa4a0c6000c9da4debbe7995052d5e548865bd not found: ID does not exist" containerID="42792576b404dab3c874049bf4fa4a0c6000c9da4debbe7995052d5e548865bd" Jan 29 16:04:30 crc kubenswrapper[4757]: I0129 16:04:30.358949 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42792576b404dab3c874049bf4fa4a0c6000c9da4debbe7995052d5e548865bd"} err="failed to get container status \"42792576b404dab3c874049bf4fa4a0c6000c9da4debbe7995052d5e548865bd\": rpc error: code = NotFound desc = could not find container \"42792576b404dab3c874049bf4fa4a0c6000c9da4debbe7995052d5e548865bd\": container with ID starting with 42792576b404dab3c874049bf4fa4a0c6000c9da4debbe7995052d5e548865bd not found: ID does not exist" Jan 29 16:04:30 crc kubenswrapper[4757]: I0129 16:04:30.396394 4757 scope.go:117] "RemoveContainer" containerID="341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d" Jan 29 16:04:30 crc kubenswrapper[4757]: E0129 16:04:30.396774 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 16:04:31 crc kubenswrapper[4757]: I0129 16:04:31.415547 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="297c1c60-a0c5-4628-af4b-86a321a9d55c" path="/var/lib/kubelet/pods/297c1c60-a0c5-4628-af4b-86a321a9d55c/volumes" Jan 29 16:04:41 crc kubenswrapper[4757]: I0129 16:04:41.541629 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm_8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4/util/0.log" Jan 29 16:04:41 crc kubenswrapper[4757]: I0129 16:04:41.697761 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm_8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4/util/0.log" Jan 29 16:04:41 crc kubenswrapper[4757]: I0129 16:04:41.742658 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm_8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4/pull/0.log" Jan 29 16:04:41 crc kubenswrapper[4757]: I0129 16:04:41.747066 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm_8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4/pull/0.log" Jan 29 16:04:41 crc kubenswrapper[4757]: I0129 16:04:41.930823 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm_8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4/pull/0.log" Jan 29 16:04:41 crc kubenswrapper[4757]: I0129 16:04:41.985230 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm_8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4/util/0.log" Jan 29 16:04:42 crc kubenswrapper[4757]: I0129 16:04:42.012721 4757 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_b31e24a755373557ff5518f1fbe3f73064cf93f8a3e0a49150b8ff679452cwm_8dd0f2e3-75aa-4ec3-be24-319c1ac69fd4/extract/0.log" Jan 29 16:04:42 crc kubenswrapper[4757]: I0129 16:04:42.156534 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-79f547bdd5-7bg8k_629b88f8-504a-4e19-914a-7359c131deb2/manager/0.log" Jan 29 16:04:42 crc kubenswrapper[4757]: I0129 16:04:42.213553 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-858d89fd-hf2f8_2db120e3-48a1-46c6-9d75-9e60012dcff4/manager/0.log" Jan 29 16:04:42 crc kubenswrapper[4757]: I0129 16:04:42.387165 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-dd77988f8-h7w6l_dc003609-336a-4cc2-a0fa-e3cd693a803d/manager/0.log" Jan 29 16:04:42 crc kubenswrapper[4757]: I0129 16:04:42.472926 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-f8c4db9df-76jqr_dc96ab98-0882-4c4c-8011-642f5da0ce8d/manager/0.log" Jan 29 16:04:42 crc kubenswrapper[4757]: I0129 16:04:42.622085 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-d8b84fbc-qrdfv_0ae0f41a-2010-4578-a849-a47110a5cad7/manager/0.log" Jan 29 16:04:42 crc kubenswrapper[4757]: I0129 16:04:42.692302 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-s5px2_edc8f287-a4c1-4558-b279-5159e135e838/manager/0.log" Jan 29 16:04:42 crc kubenswrapper[4757]: I0129 16:04:42.909159 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-qzz5n_5d2d32e1-adbe-4b24-bd98-0e51a52283f5/manager/0.log" Jan 29 16:04:42 crc kubenswrapper[4757]: I0129 16:04:42.949874 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-866c9d5b98-tbvmq_eb034926-25ee-4735-a9c4-407c7cd152a4/manager/0.log" Jan 29 16:04:43 crc kubenswrapper[4757]: I0129 16:04:43.033389 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-8ccc8547b-jh2fm_e9b2ed23-04f3-479f-870f-10f54f6ecab9/manager/0.log" Jan 29 16:04:43 crc kubenswrapper[4757]: I0129 16:04:43.181502 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-76c896469f-lflf2_d75a2490-77f1-41f0-b9c5-efcc7a2e520c/manager/0.log" Jan 29 16:04:43 crc kubenswrapper[4757]: I0129 16:04:43.223550 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-2kd46_635077c8-931b-4bda-b7dc-117279b97a5e/manager/0.log" Jan 29 16:04:43 crc kubenswrapper[4757]: I0129 16:04:43.414762 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c7cc6ff45-gpkbd_0180bde3-8b8c-4ffe-a5d2-cc39199feb28/manager/0.log" Jan 29 16:04:43 crc kubenswrapper[4757]: I0129 16:04:43.474633 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-68cb478976-5rfk2_a5549d49-38a8-4441-8200-6381ddf682b6/manager/0.log" Jan 29 16:04:43 crc kubenswrapper[4757]: I0129 16:04:43.619305 4757 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-68f8cb846c-kng6x_5590a40a-b378-4912-881d-68b46fb6564d/manager/0.log" Jan 29 16:04:43 crc kubenswrapper[4757]: I0129 16:04:43.713023 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4d7wvtj_5297dfef-4739-4076-99f2-462bf83c4b4b/manager/0.log" Jan 29 16:04:43 crc kubenswrapper[4757]: I0129 16:04:43.969206 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6dff856477-hgxdq_f40b9aa0-bc1b-49bf-a4ac-1ac90da4734e/operator/0.log" Jan 29 16:04:44 crc kubenswrapper[4757]: I0129 16:04:44.001133 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5cbc58956b-jn7tc_e25703a2-f64f-43ff-b95f-3c9640fd9029/manager/0.log" Jan 29 16:04:44 crc kubenswrapper[4757]: I0129 16:04:44.147809 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-xsflf_4a4af715-ebf7-4ad7-a1ff-7b3a4a90512a/registry-server/0.log" Jan 29 16:04:44 crc kubenswrapper[4757]: I0129 16:04:44.202883 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-82wnl_c7d33f5e-ce62-40e5-9400-c28c1cb50753/manager/0.log" Jan 29 16:04:44 crc kubenswrapper[4757]: I0129 16:04:44.383848 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-2zgqs_4ab1a5d0-6fc4-4081-85d6-047635db038e/manager/0.log" Jan 29 16:04:44 crc kubenswrapper[4757]: I0129 16:04:44.388586 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-fdwks_2c9cefc6-204f-42c8-b7a6-2c2776617a58/operator/0.log" Jan 29 16:04:44 crc kubenswrapper[4757]: I0129 16:04:44.584320 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6f7455757b-zfvjn_a921cf1b-0823-487b-9b4f-eb7eefca9cb5/manager/0.log" Jan 29 16:04:44 crc kubenswrapper[4757]: I0129 16:04:44.601369 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6cf8c44c7-grncr_1373c007-6220-40ca-a9a7-176d6779ff9e/manager/0.log" Jan 29 16:04:44 crc kubenswrapper[4757]: I0129 16:04:44.777343 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-dmc9f_0971e983-bccd-421c-8171-212672e8b8b7/manager/0.log" Jan 29 16:04:44 crc kubenswrapper[4757]: I0129 16:04:44.845759 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-59f4c7d7c4-6z2bh_fd851f0e-29f7-44b9-8c6e-f3b66a90c6b6/manager/0.log" Jan 29 16:04:45 crc kubenswrapper[4757]: I0129 16:04:45.396690 4757 scope.go:117] "RemoveContainer" containerID="341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d" Jan 29 16:04:45 crc kubenswrapper[4757]: E0129 16:04:45.396949 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" 
podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 16:04:57 crc kubenswrapper[4757]: I0129 16:04:57.400176 4757 scope.go:117] "RemoveContainer" containerID="341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d" Jan 29 16:04:57 crc kubenswrapper[4757]: E0129 16:04:57.401011 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 16:05:03 crc kubenswrapper[4757]: I0129 16:05:03.727927 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qzc6g_b10bc118-1493-4055-a8c2-1a1b9aca7c91/control-plane-machine-set-operator/0.log" Jan 29 16:05:03 crc kubenswrapper[4757]: I0129 16:05:03.909831 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-z9qzn_bab27dde-a537-445c-8d39-ad7479b66bcb/kube-rbac-proxy/0.log" Jan 29 16:05:04 crc kubenswrapper[4757]: I0129 16:05:04.010140 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-z9qzn_bab27dde-a537-445c-8d39-ad7479b66bcb/machine-api-operator/0.log" Jan 29 16:05:11 crc kubenswrapper[4757]: I0129 16:05:11.396532 4757 scope.go:117] "RemoveContainer" containerID="341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d" Jan 29 16:05:11 crc kubenswrapper[4757]: E0129 16:05:11.397252 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 16:05:15 crc kubenswrapper[4757]: I0129 16:05:15.865922 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-pdsxk_b8c4b13b-d870-4731-a95f-c0a3b7d1f896/cert-manager-controller/0.log" Jan 29 16:05:16 crc kubenswrapper[4757]: I0129 16:05:16.051280 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-wvl6f_c460cf1a-344b-4096-b5c7-187f4083d2c1/cert-manager-cainjector/0.log" Jan 29 16:05:16 crc kubenswrapper[4757]: I0129 16:05:16.131171 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-bljgl_a4dc59f9-ba72-46db-be8b-f83bf7c99b8a/cert-manager-webhook/0.log" Jan 29 16:05:26 crc kubenswrapper[4757]: I0129 16:05:26.396220 4757 scope.go:117] "RemoveContainer" containerID="341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d" Jan 29 16:05:26 crc kubenswrapper[4757]: E0129 16:05:26.397001 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" 
podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 16:05:28 crc kubenswrapper[4757]: I0129 16:05:28.067240 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-dbhgn_d7b21031-5a8e-4894-b583-c98cfd281944/nmstate-console-plugin/0.log" Jan 29 16:05:28 crc kubenswrapper[4757]: I0129 16:05:28.258136 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-c8zb9_a192d652-3d56-4191-908d-6f0241a07573/nmstate-handler/0.log" Jan 29 16:05:28 crc kubenswrapper[4757]: I0129 16:05:28.291460 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-nzjdc_27377a37-8829-4efd-9df9-4804bc4689fc/kube-rbac-proxy/0.log" Jan 29 16:05:28 crc kubenswrapper[4757]: I0129 16:05:28.328623 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-nzjdc_27377a37-8829-4efd-9df9-4804bc4689fc/nmstate-metrics/0.log" Jan 29 16:05:28 crc kubenswrapper[4757]: I0129 16:05:28.516732 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-l49b9_0c80f85e-cab4-4177-800e-0fb5f301c838/nmstate-operator/0.log" Jan 29 16:05:28 crc kubenswrapper[4757]: I0129 16:05:28.600760 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-kgkvx_6877b102-1cc7-4306-93db-567d7f162a2a/nmstate-webhook/0.log" Jan 29 16:05:40 crc kubenswrapper[4757]: I0129 16:05:40.396765 4757 scope.go:117] "RemoveContainer" containerID="341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d" Jan 29 16:05:40 crc kubenswrapper[4757]: E0129 16:05:40.399063 4757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-45q8t_openshift-machine-config-operator(f453676a-fbf0-4159-8a5a-04c0138b42c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" Jan 29 16:05:50 crc kubenswrapper[4757]: I0129 16:05:50.272496 4757 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-6prjx" podUID="96b41b6b-3fb0-4a49-9ca5-d220053e2aa3" containerName="registry-server" probeResult="failure" output=< Jan 29 16:05:50 crc kubenswrapper[4757]: timeout: failed to connect service ":50051" within 1s Jan 29 16:05:50 crc kubenswrapper[4757]: > Jan 29 16:05:50 crc kubenswrapper[4757]: I0129 16:05:50.289002 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-6prjx" podUID="96b41b6b-3fb0-4a49-9ca5-d220053e2aa3" containerName="registry-server" probeResult="failure" output=< Jan 29 16:05:50 crc kubenswrapper[4757]: timeout: failed to connect service ":50051" within 1s Jan 29 16:05:50 crc kubenswrapper[4757]: > Jan 29 16:05:52 crc kubenswrapper[4757]: I0129 16:05:52.396464 4757 scope.go:117] "RemoveContainer" containerID="341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d" Jan 29 16:05:53 crc kubenswrapper[4757]: I0129 16:05:53.204355 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerStarted","Data":"af33411d3718d6cd507791a00552702d5fed9417b9e9585eb26766e4db17f230"} Jan 29 16:05:56 crc kubenswrapper[4757]: I0129 
16:05:56.055285 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-sll65_31de118f-e4a8-488b-91a9-470c6cdc900c/kube-rbac-proxy/0.log" Jan 29 16:05:56 crc kubenswrapper[4757]: I0129 16:05:56.123161 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-sll65_31de118f-e4a8-488b-91a9-470c6cdc900c/controller/0.log" Jan 29 16:05:56 crc kubenswrapper[4757]: I0129 16:05:56.212342 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9dgf4_6bf723f3-fad1-4294-824a-97b5c64953d5/cp-frr-files/0.log" Jan 29 16:05:56 crc kubenswrapper[4757]: I0129 16:05:56.446635 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9dgf4_6bf723f3-fad1-4294-824a-97b5c64953d5/cp-reloader/0.log" Jan 29 16:05:56 crc kubenswrapper[4757]: I0129 16:05:56.475581 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9dgf4_6bf723f3-fad1-4294-824a-97b5c64953d5/cp-reloader/0.log" Jan 29 16:05:56 crc kubenswrapper[4757]: I0129 16:05:56.475821 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9dgf4_6bf723f3-fad1-4294-824a-97b5c64953d5/cp-frr-files/0.log" Jan 29 16:05:56 crc kubenswrapper[4757]: I0129 16:05:56.525768 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9dgf4_6bf723f3-fad1-4294-824a-97b5c64953d5/cp-metrics/0.log" Jan 29 16:05:56 crc kubenswrapper[4757]: I0129 16:05:56.679138 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9dgf4_6bf723f3-fad1-4294-824a-97b5c64953d5/cp-metrics/0.log" Jan 29 16:05:56 crc kubenswrapper[4757]: I0129 16:05:56.688750 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9dgf4_6bf723f3-fad1-4294-824a-97b5c64953d5/cp-frr-files/0.log" Jan 29 16:05:56 crc kubenswrapper[4757]: I0129 16:05:56.688751 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9dgf4_6bf723f3-fad1-4294-824a-97b5c64953d5/cp-reloader/0.log" Jan 29 16:05:56 crc kubenswrapper[4757]: I0129 16:05:56.731060 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9dgf4_6bf723f3-fad1-4294-824a-97b5c64953d5/cp-metrics/0.log" Jan 29 16:05:56 crc kubenswrapper[4757]: I0129 16:05:56.939142 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9dgf4_6bf723f3-fad1-4294-824a-97b5c64953d5/cp-frr-files/0.log" Jan 29 16:05:56 crc kubenswrapper[4757]: I0129 16:05:56.963605 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9dgf4_6bf723f3-fad1-4294-824a-97b5c64953d5/cp-metrics/0.log" Jan 29 16:05:56 crc kubenswrapper[4757]: I0129 16:05:56.980819 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9dgf4_6bf723f3-fad1-4294-824a-97b5c64953d5/cp-reloader/0.log" Jan 29 16:05:57 crc kubenswrapper[4757]: I0129 16:05:57.018887 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9dgf4_6bf723f3-fad1-4294-824a-97b5c64953d5/controller/0.log" Jan 29 16:05:57 crc kubenswrapper[4757]: I0129 16:05:57.192194 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9dgf4_6bf723f3-fad1-4294-824a-97b5c64953d5/frr-metrics/0.log" Jan 29 16:05:57 crc kubenswrapper[4757]: I0129 16:05:57.213198 4757 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-9dgf4_6bf723f3-fad1-4294-824a-97b5c64953d5/kube-rbac-proxy/0.log" Jan 29 16:05:57 crc kubenswrapper[4757]: I0129 16:05:57.279389 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9dgf4_6bf723f3-fad1-4294-824a-97b5c64953d5/kube-rbac-proxy-frr/0.log" Jan 29 16:05:57 crc kubenswrapper[4757]: I0129 16:05:57.377880 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9dgf4_6bf723f3-fad1-4294-824a-97b5c64953d5/frr/0.log" Jan 29 16:05:57 crc kubenswrapper[4757]: I0129 16:05:57.501003 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9dgf4_6bf723f3-fad1-4294-824a-97b5c64953d5/reloader/0.log" Jan 29 16:05:57 crc kubenswrapper[4757]: I0129 16:05:57.554386 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-w97wd_200c0920-028c-4895-a093-edf9ee940c1f/frr-k8s-webhook-server/0.log" Jan 29 16:05:57 crc kubenswrapper[4757]: I0129 16:05:57.764570 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6d644c45b7-tjdph_8481f32c-d659-4dbb-9ddf-962d17346afc/webhook-server/0.log" Jan 29 16:05:57 crc kubenswrapper[4757]: I0129 16:05:57.783491 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5c94d76d46-599j4_b1bec22e-bc28-4615-b6f8-e639da353268/manager/0.log" Jan 29 16:05:57 crc kubenswrapper[4757]: I0129 16:05:57.933528 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6xltj_e131102b-c200-45ff-a236-9b2cd0435f88/kube-rbac-proxy/0.log" Jan 29 16:05:58 crc kubenswrapper[4757]: I0129 16:05:58.076569 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6xltj_e131102b-c200-45ff-a236-9b2cd0435f88/speaker/0.log" Jan 29 16:06:12 crc kubenswrapper[4757]: I0129 16:06:12.042814 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc_15b358ad-9ec6-457c-8876-9d3d7924e631/util/0.log" Jan 29 16:06:12 crc kubenswrapper[4757]: I0129 16:06:12.421006 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc_15b358ad-9ec6-457c-8876-9d3d7924e631/util/0.log" Jan 29 16:06:12 crc kubenswrapper[4757]: I0129 16:06:12.431125 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc_15b358ad-9ec6-457c-8876-9d3d7924e631/pull/0.log" Jan 29 16:06:12 crc kubenswrapper[4757]: I0129 16:06:12.443831 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc_15b358ad-9ec6-457c-8876-9d3d7924e631/pull/0.log" Jan 29 16:06:12 crc kubenswrapper[4757]: I0129 16:06:12.681771 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc_15b358ad-9ec6-457c-8876-9d3d7924e631/pull/0.log" Jan 29 16:06:12 crc kubenswrapper[4757]: I0129 16:06:12.722843 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc_15b358ad-9ec6-457c-8876-9d3d7924e631/util/0.log" Jan 29 16:06:12 crc kubenswrapper[4757]: I0129 16:06:12.724868 4757 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczzqgc_15b358ad-9ec6-457c-8876-9d3d7924e631/extract/0.log" Jan 29 16:06:12 crc kubenswrapper[4757]: I0129 16:06:12.883383 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz_3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2/util/0.log" Jan 29 16:06:13 crc kubenswrapper[4757]: I0129 16:06:13.123279 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz_3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2/pull/0.log" Jan 29 16:06:13 crc kubenswrapper[4757]: I0129 16:06:13.168053 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz_3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2/util/0.log" Jan 29 16:06:13 crc kubenswrapper[4757]: I0129 16:06:13.168244 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz_3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2/pull/0.log" Jan 29 16:06:13 crc kubenswrapper[4757]: I0129 16:06:13.325155 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz_3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2/util/0.log" Jan 29 16:06:13 crc kubenswrapper[4757]: I0129 16:06:13.373042 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz_3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2/extract/0.log" Jan 29 16:06:13 crc kubenswrapper[4757]: I0129 16:06:13.434518 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhlcz_3a6f611a-8f0d-45a7-a1f4-75cb85eb65a2/pull/0.log" Jan 29 16:06:13 crc kubenswrapper[4757]: I0129 16:06:13.614014 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2cq2s_865d1515-2b66-4b6e-b670-d01e37c88cac/extract-utilities/0.log" Jan 29 16:06:13 crc kubenswrapper[4757]: I0129 16:06:13.761065 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2cq2s_865d1515-2b66-4b6e-b670-d01e37c88cac/extract-utilities/0.log" Jan 29 16:06:13 crc kubenswrapper[4757]: I0129 16:06:13.798715 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2cq2s_865d1515-2b66-4b6e-b670-d01e37c88cac/extract-content/0.log" Jan 29 16:06:13 crc kubenswrapper[4757]: I0129 16:06:13.807365 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2cq2s_865d1515-2b66-4b6e-b670-d01e37c88cac/extract-content/0.log" Jan 29 16:06:14 crc kubenswrapper[4757]: I0129 16:06:14.025752 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2cq2s_865d1515-2b66-4b6e-b670-d01e37c88cac/extract-utilities/0.log" Jan 29 16:06:14 crc kubenswrapper[4757]: I0129 16:06:14.052829 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2cq2s_865d1515-2b66-4b6e-b670-d01e37c88cac/extract-content/0.log" Jan 29 16:06:14 crc kubenswrapper[4757]: I0129 16:06:14.376528 4757 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_certified-operators-2cq2s_865d1515-2b66-4b6e-b670-d01e37c88cac/registry-server/0.log" Jan 29 16:06:14 crc kubenswrapper[4757]: I0129 16:06:14.620643 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-slm8x_4f737994-cd39-4543-ab57-9591a9322823/extract-utilities/0.log" Jan 29 16:06:14 crc kubenswrapper[4757]: I0129 16:06:14.782993 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-slm8x_4f737994-cd39-4543-ab57-9591a9322823/extract-utilities/0.log" Jan 29 16:06:14 crc kubenswrapper[4757]: I0129 16:06:14.815531 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-slm8x_4f737994-cd39-4543-ab57-9591a9322823/extract-content/0.log" Jan 29 16:06:14 crc kubenswrapper[4757]: I0129 16:06:14.867541 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-slm8x_4f737994-cd39-4543-ab57-9591a9322823/extract-content/0.log" Jan 29 16:06:15 crc kubenswrapper[4757]: I0129 16:06:15.055322 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-slm8x_4f737994-cd39-4543-ab57-9591a9322823/extract-utilities/0.log" Jan 29 16:06:15 crc kubenswrapper[4757]: I0129 16:06:15.123587 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-slm8x_4f737994-cd39-4543-ab57-9591a9322823/extract-content/0.log" Jan 29 16:06:15 crc kubenswrapper[4757]: I0129 16:06:15.268092 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4bj76_c3ae448c-6e33-42e9-bc9b-e909525820fb/marketplace-operator/0.log" Jan 29 16:06:15 crc kubenswrapper[4757]: I0129 16:06:15.462092 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-slm8x_4f737994-cd39-4543-ab57-9591a9322823/registry-server/0.log" Jan 29 16:06:15 crc kubenswrapper[4757]: I0129 16:06:15.466233 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7cmzn_12bc2599-1396-4296-b78a-d37850977495/extract-utilities/0.log" Jan 29 16:06:15 crc kubenswrapper[4757]: I0129 16:06:15.763851 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7cmzn_12bc2599-1396-4296-b78a-d37850977495/extract-utilities/0.log" Jan 29 16:06:15 crc kubenswrapper[4757]: I0129 16:06:15.840419 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7cmzn_12bc2599-1396-4296-b78a-d37850977495/extract-content/0.log" Jan 29 16:06:15 crc kubenswrapper[4757]: I0129 16:06:15.879101 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7cmzn_12bc2599-1396-4296-b78a-d37850977495/extract-content/0.log" Jan 29 16:06:16 crc kubenswrapper[4757]: I0129 16:06:16.042705 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7cmzn_12bc2599-1396-4296-b78a-d37850977495/extract-utilities/0.log" Jan 29 16:06:16 crc kubenswrapper[4757]: I0129 16:06:16.149366 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7cmzn_12bc2599-1396-4296-b78a-d37850977495/extract-content/0.log" Jan 29 16:06:16 crc kubenswrapper[4757]: I0129 16:06:16.234887 4757 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-7cmzn_12bc2599-1396-4296-b78a-d37850977495/registry-server/0.log" Jan 29 16:06:16 crc kubenswrapper[4757]: I0129 16:06:16.383423 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6prjx_96b41b6b-3fb0-4a49-9ca5-d220053e2aa3/extract-utilities/0.log" Jan 29 16:06:16 crc kubenswrapper[4757]: I0129 16:06:16.648111 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6prjx_96b41b6b-3fb0-4a49-9ca5-d220053e2aa3/extract-content/0.log" Jan 29 16:06:16 crc kubenswrapper[4757]: I0129 16:06:16.686107 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6prjx_96b41b6b-3fb0-4a49-9ca5-d220053e2aa3/extract-content/0.log" Jan 29 16:06:16 crc kubenswrapper[4757]: I0129 16:06:16.720040 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6prjx_96b41b6b-3fb0-4a49-9ca5-d220053e2aa3/extract-utilities/0.log" Jan 29 16:06:16 crc kubenswrapper[4757]: I0129 16:06:16.889885 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6prjx_96b41b6b-3fb0-4a49-9ca5-d220053e2aa3/extract-utilities/0.log" Jan 29 16:06:16 crc kubenswrapper[4757]: I0129 16:06:16.932518 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6prjx_96b41b6b-3fb0-4a49-9ca5-d220053e2aa3/extract-content/0.log" Jan 29 16:06:17 crc kubenswrapper[4757]: I0129 16:06:17.249891 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6prjx_96b41b6b-3fb0-4a49-9ca5-d220053e2aa3/registry-server/0.log" Jan 29 16:07:34 crc kubenswrapper[4757]: I0129 16:07:34.902575 4757 generic.go:334] "Generic (PLEG): container finished" podID="48f6b0d3-3584-4888-a995-05cd919020b5" containerID="c3ef2760ef914cad827c349033de933b671679fd6264994970eb75776640f2fd" exitCode=0 Jan 29 16:07:34 crc kubenswrapper[4757]: I0129 16:07:34.902675 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g497m/must-gather-tq6qx" event={"ID":"48f6b0d3-3584-4888-a995-05cd919020b5","Type":"ContainerDied","Data":"c3ef2760ef914cad827c349033de933b671679fd6264994970eb75776640f2fd"} Jan 29 16:07:34 crc kubenswrapper[4757]: I0129 16:07:34.903590 4757 scope.go:117] "RemoveContainer" containerID="c3ef2760ef914cad827c349033de933b671679fd6264994970eb75776640f2fd" Jan 29 16:07:35 crc kubenswrapper[4757]: I0129 16:07:35.914680 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-g497m_must-gather-tq6qx_48f6b0d3-3584-4888-a995-05cd919020b5/gather/0.log" Jan 29 16:07:43 crc kubenswrapper[4757]: I0129 16:07:43.854215 4757 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-g497m/must-gather-tq6qx"] Jan 29 16:07:43 crc kubenswrapper[4757]: I0129 16:07:43.855771 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-g497m/must-gather-tq6qx" podUID="48f6b0d3-3584-4888-a995-05cd919020b5" containerName="copy" containerID="cri-o://38a4d990d224d2b754227406b253b9175fd6ec2d8acfab524ce0e5fc1254ba1b" gracePeriod=2 Jan 29 16:07:43 crc kubenswrapper[4757]: I0129 16:07:43.859381 4757 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-g497m/must-gather-tq6qx"] Jan 29 16:07:44 crc kubenswrapper[4757]: I0129 16:07:44.395432 4757 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-g497m_must-gather-tq6qx_48f6b0d3-3584-4888-a995-05cd919020b5/copy/0.log" Jan 29 16:07:44 crc kubenswrapper[4757]: I0129 16:07:44.396662 4757 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-g497m/must-gather-tq6qx" Jan 29 16:07:44 crc kubenswrapper[4757]: I0129 16:07:44.414631 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/48f6b0d3-3584-4888-a995-05cd919020b5-must-gather-output\") pod \"48f6b0d3-3584-4888-a995-05cd919020b5\" (UID: \"48f6b0d3-3584-4888-a995-05cd919020b5\") " Jan 29 16:07:44 crc kubenswrapper[4757]: I0129 16:07:44.414732 4757 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vwlc\" (UniqueName: \"kubernetes.io/projected/48f6b0d3-3584-4888-a995-05cd919020b5-kube-api-access-7vwlc\") pod \"48f6b0d3-3584-4888-a995-05cd919020b5\" (UID: \"48f6b0d3-3584-4888-a995-05cd919020b5\") " Jan 29 16:07:44 crc kubenswrapper[4757]: I0129 16:07:44.423726 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48f6b0d3-3584-4888-a995-05cd919020b5-kube-api-access-7vwlc" (OuterVolumeSpecName: "kube-api-access-7vwlc") pod "48f6b0d3-3584-4888-a995-05cd919020b5" (UID: "48f6b0d3-3584-4888-a995-05cd919020b5"). InnerVolumeSpecName "kube-api-access-7vwlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:07:44 crc kubenswrapper[4757]: I0129 16:07:44.506959 4757 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48f6b0d3-3584-4888-a995-05cd919020b5-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "48f6b0d3-3584-4888-a995-05cd919020b5" (UID: "48f6b0d3-3584-4888-a995-05cd919020b5"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:07:44 crc kubenswrapper[4757]: I0129 16:07:44.516698 4757 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/48f6b0d3-3584-4888-a995-05cd919020b5-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 29 16:07:44 crc kubenswrapper[4757]: I0129 16:07:44.516943 4757 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vwlc\" (UniqueName: \"kubernetes.io/projected/48f6b0d3-3584-4888-a995-05cd919020b5-kube-api-access-7vwlc\") on node \"crc\" DevicePath \"\"" Jan 29 16:07:44 crc kubenswrapper[4757]: I0129 16:07:44.986227 4757 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-g497m_must-gather-tq6qx_48f6b0d3-3584-4888-a995-05cd919020b5/copy/0.log" Jan 29 16:07:44 crc kubenswrapper[4757]: I0129 16:07:44.987286 4757 generic.go:334] "Generic (PLEG): container finished" podID="48f6b0d3-3584-4888-a995-05cd919020b5" containerID="38a4d990d224d2b754227406b253b9175fd6ec2d8acfab524ce0e5fc1254ba1b" exitCode=143 Jan 29 16:07:44 crc kubenswrapper[4757]: I0129 16:07:44.987361 4757 scope.go:117] "RemoveContainer" containerID="38a4d990d224d2b754227406b253b9175fd6ec2d8acfab524ce0e5fc1254ba1b" Jan 29 16:07:44 crc kubenswrapper[4757]: I0129 16:07:44.987372 4757 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-g497m/must-gather-tq6qx" Jan 29 16:07:45 crc kubenswrapper[4757]: I0129 16:07:45.009072 4757 scope.go:117] "RemoveContainer" containerID="c3ef2760ef914cad827c349033de933b671679fd6264994970eb75776640f2fd" Jan 29 16:07:45 crc kubenswrapper[4757]: I0129 16:07:45.093456 4757 scope.go:117] "RemoveContainer" containerID="38a4d990d224d2b754227406b253b9175fd6ec2d8acfab524ce0e5fc1254ba1b" Jan 29 16:07:45 crc kubenswrapper[4757]: E0129 16:07:45.094824 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38a4d990d224d2b754227406b253b9175fd6ec2d8acfab524ce0e5fc1254ba1b\": container with ID starting with 38a4d990d224d2b754227406b253b9175fd6ec2d8acfab524ce0e5fc1254ba1b not found: ID does not exist" containerID="38a4d990d224d2b754227406b253b9175fd6ec2d8acfab524ce0e5fc1254ba1b" Jan 29 16:07:45 crc kubenswrapper[4757]: I0129 16:07:45.094885 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38a4d990d224d2b754227406b253b9175fd6ec2d8acfab524ce0e5fc1254ba1b"} err="failed to get container status \"38a4d990d224d2b754227406b253b9175fd6ec2d8acfab524ce0e5fc1254ba1b\": rpc error: code = NotFound desc = could not find container \"38a4d990d224d2b754227406b253b9175fd6ec2d8acfab524ce0e5fc1254ba1b\": container with ID starting with 38a4d990d224d2b754227406b253b9175fd6ec2d8acfab524ce0e5fc1254ba1b not found: ID does not exist" Jan 29 16:07:45 crc kubenswrapper[4757]: I0129 16:07:45.094922 4757 scope.go:117] "RemoveContainer" containerID="c3ef2760ef914cad827c349033de933b671679fd6264994970eb75776640f2fd" Jan 29 16:07:45 crc kubenswrapper[4757]: E0129 16:07:45.095205 4757 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3ef2760ef914cad827c349033de933b671679fd6264994970eb75776640f2fd\": container with ID starting with c3ef2760ef914cad827c349033de933b671679fd6264994970eb75776640f2fd not found: ID does not exist" containerID="c3ef2760ef914cad827c349033de933b671679fd6264994970eb75776640f2fd" Jan 29 16:07:45 crc kubenswrapper[4757]: I0129 16:07:45.095251 4757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3ef2760ef914cad827c349033de933b671679fd6264994970eb75776640f2fd"} err="failed to get container status \"c3ef2760ef914cad827c349033de933b671679fd6264994970eb75776640f2fd\": rpc error: code = NotFound desc = could not find container \"c3ef2760ef914cad827c349033de933b671679fd6264994970eb75776640f2fd\": container with ID starting with c3ef2760ef914cad827c349033de933b671679fd6264994970eb75776640f2fd not found: ID does not exist" Jan 29 16:07:45 crc kubenswrapper[4757]: I0129 16:07:45.409138 4757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48f6b0d3-3584-4888-a995-05cd919020b5" path="/var/lib/kubelet/pods/48f6b0d3-3584-4888-a995-05cd919020b5/volumes" Jan 29 16:08:17 crc kubenswrapper[4757]: I0129 16:08:17.604521 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:08:17 crc kubenswrapper[4757]: I0129 16:08:17.605162 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" 
podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:08:47 crc kubenswrapper[4757]: I0129 16:08:47.605166 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:08:47 crc kubenswrapper[4757]: I0129 16:08:47.605765 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:09:17 crc kubenswrapper[4757]: I0129 16:09:17.605210 4757 patch_prober.go:28] interesting pod/machine-config-daemon-45q8t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:09:17 crc kubenswrapper[4757]: I0129 16:09:17.605850 4757 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:09:17 crc kubenswrapper[4757]: I0129 16:09:17.605901 4757 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" Jan 29 16:09:17 crc kubenswrapper[4757]: I0129 16:09:17.606581 4757 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"af33411d3718d6cd507791a00552702d5fed9417b9e9585eb26766e4db17f230"} pod="openshift-machine-config-operator/machine-config-daemon-45q8t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:09:17 crc kubenswrapper[4757]: I0129 16:09:17.606645 4757 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" podUID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerName="machine-config-daemon" containerID="cri-o://af33411d3718d6cd507791a00552702d5fed9417b9e9585eb26766e4db17f230" gracePeriod=600 Jan 29 16:09:18 crc kubenswrapper[4757]: I0129 16:09:18.671174 4757 generic.go:334] "Generic (PLEG): container finished" podID="f453676a-fbf0-4159-8a5a-04c0138b42c1" containerID="af33411d3718d6cd507791a00552702d5fed9417b9e9585eb26766e4db17f230" exitCode=0 Jan 29 16:09:18 crc kubenswrapper[4757]: I0129 16:09:18.671295 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerDied","Data":"af33411d3718d6cd507791a00552702d5fed9417b9e9585eb26766e4db17f230"} Jan 29 16:09:18 crc kubenswrapper[4757]: I0129 16:09:18.671593 4757 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-45q8t" 
event={"ID":"f453676a-fbf0-4159-8a5a-04c0138b42c1","Type":"ContainerStarted","Data":"56134af6a4383aca114356fe4ffa785faa1262c1e910aeba425d37e2eb0855de"} Jan 29 16:09:18 crc kubenswrapper[4757]: I0129 16:09:18.671614 4757 scope.go:117] "RemoveContainer" containerID="341b0a285a83aaaeda16d772469598820be474dabd9606ec022f049ea4f0243d" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515136703217024452 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015136703220017361 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015136673363016522 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015136673364015473 5ustar corecore